rlgc-project / rlgc

An open-source platform for applying Reinforcement Learning for Grid Control (RLGC)

License: Other

reinforcement-learning power-grids control electrical-engineering optimal-control grid-environment openai-gym grid-control

rlgc's Introduction

RLGC

Repo of the Reinforcement Learning for Grid Control (RLGC) Project.

In this project, we explore the use of deep reinforcement learning methods for control and decision-making problems in power systems. We leverage the InterPSS simulation platform (http://www.interpss.org/) as the power system simulator, and we develop an OpenAI Gym (https://gym.openai.com/) compatible power grid dynamic simulation environment for developing, testing, and benchmarking reinforcement learning algorithms for grid control.

NOTE: RLGC is under active development and may change at any time. Feel free to provide feedback and comments.


Environment setup

To run the training, you need Python 3.5 or above and Java 8. A Unix-based OS is recommended. We suggest using Anaconda to create a virtual environment from the YAML file we provide.

  • To clone our project

    git clone https://github.com/RLGC-Project/RLGC.git
    
  • To create the virtual environment
    If you would like to use our development environment, we have provided an environment.yml file:

    cd RLGC
    conda env create -f environment.yml
    

    or you can create your own environment. The main dependencies include gym, tensorflow, py4j, numpy, matplotlib, stable-baselines, and jupyter-notebook:

    cd RLGC    
    conda create --name <your-env-name>
    

    If you get errors about OpenAI Gym, you probably need to install cmake and zlib1g-dev. For example, on an Ubuntu machine, run the following commands:

    sudo apt-get update
    sudo apt-get install cmake
    sudo apt-get install zlib1g-dev
    

    After creating the environment, you can activate it and do development inside it.

  • To activate virtual environment

    source activate <your-env-name>  
    
  • To deactivate virtual environment

    source deactivate
    

Training

  • With RLGCJavaServer version 0.80 or newer and grid environment definition version 5 (PowerDynSimEnvDef_v5.py) or newer, users don't need to start the Java server explicitly. The server is started automatically when the grid environment PowerDynSimEnv is created.
  • To launch the training, first activate the virtual environment, then run the training script under the example folder:
source activate <your-env-name> 
cd RLGC/examples/IEEE39_load_shedding/  
python trainIEEE39LoadSheddingAgent_discrete_action.py 

During training, the training log is printed to the screen. After training, you can deactivate the virtual environment with:

source deactivate

Check training results and test trained model

Two Jupyter notebooks (with Linux and Windows versions, since directory paths are specified differently) are provided as examples for checking training results and testing a trained RL model.

Customize the grid environment for training and testing

If you want to develop a new grid environment for RL training, or customize an existing one (e.g., the IEEE 39-bus system for load shedding), the simplest way is to provide your own case and configuration files.

When you open trainIEEE39LoadSheddingAgent_discrete_action.py, you will notice the following code:

case_files_array =[]
case_files_array.append(repo_path + '/testData/IEEE39/IEEE39bus_multiloads_xfmr4_smallX_v30.raw')
case_files_array.append(repo_path + '/testData/IEEE39/IEEE39bus_3AC.dyr')

# ...
# configuration files for dynamic simulation and RL
dyn_config_file = repo_path + '/testData/IEEE39/json/IEEE39_dyn_config.json'
rl_config_file = repo_path + '/testData/IEEE39/json/IEEE39_RL_loadShedding_3motor_2levels.json'

env = PowerDynSimEnv(case_files_array,dyn_config_file,rl_config_file, jar_path, java_port)

These lines specify the case and configuration files for dynamic simulation and RL training. You can develop your own environment by following these examples. Since PowerDynSimEnv is defined based on the OpenAI Gym environment definition, once the environment is created you can use it like any other Gym environment and seamlessly interface it with RL algorithms provided in OpenAI Baselines or Stable Baselines.
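As a concrete illustration of that Gym interface, here is a minimal sketch of the standard interaction loop. ToyGridEnv below is a stand-in written for this example only (it is not part of RLGC); PowerDynSimEnv follows the same reset()/step() pattern:

```python
class ToyGridEnv:
    """Stand-in environment for illustration only; a Gym-style
    environment such as PowerDynSimEnv exposes the same interface."""

    def __init__(self, horizon=5):
        self.horizon = horizon  # episode length in steps
        self.t = 0

    def reset(self):
        """Start a new episode and return the initial observation."""
        self.t = 0
        return [0.0]

    def step(self, action):
        """Advance one step; return (obs, reward, done, info)."""
        self.t += 1
        obs = [float(self.t)]
        reward = -abs(action)          # e.g. penalize control effort
        done = self.t >= self.horizon
        return obs, reward, done, {}

# The standard Gym interaction loop: reset, then step until done.
env = ToyGridEnv()
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = 0                         # a trained agent would choose this
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)  # 0.0 for the all-zero action policy
```

An RL library such as Stable Baselines drives PowerDynSimEnv through exactly this loop, which is why it can be swapped in wherever a Gym environment is expected.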


Citation

If you use this code, please cite it as:

@article{huang2019adaptive,
  title={Adaptive Power System Emergency Control using Deep Reinforcement Learning},
  author={Huang, Qiuhua and Huang, Renke and Hao, Weituo and Tan, Jie and Fan, Rui and Huang, Zhenyu},
  journal={IEEE Transactions on Smart Grid},
  year={2019},
  publisher={IEEE}
}

Communication

If you spot a bug or have a problem running the code, please open an issue.

Please direct other correspondence to Qiuhua Huang: qiuhua DOT huang AT pnnl DOT gov

rlgc's People

Contributors

qhuang-pnl, rl4grid, thuang, weituo12321


rlgc's Issues

about storedData

When I run training scripts such as trainKundur2areaGenBrakingAgent, trainIEEE39LoadSheddingAgent_continuous_action_DDPG, or trainIEEE39LoadSheddingAgent_discrete_action, the generated .npy files in storedData are only 1 kB, and all the step_* variables, such as step_rewards, are empty ([]).
Thus I cannot check the results with the .ipynb files you provided.
I have already built the environment using the provided yml file.
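For anyone hitting the same symptom, the quickest check is to load a stored array directly with NumPy and look at its size. The file name and data below are stand-ins for this sketch, not the actual RLGC logging code:

```python
import numpy as np

# Stand-in for what a healthy training run should leave in storedData:
step_rewards = [1.0, 0.5, -2.0]
np.save("step_rewards.npy", np.array(step_rewards))

loaded = np.load("step_rewards.npy")
if loaded.size == 0:
    # A ~1 kB file holding an empty array means no steps were logged.
    print("empty log: the training loop never recorded a step")
else:
    print("logged", loaded.size, "steps")  # prints: logged 3 steps
```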

Method __getstate__([]) does not exist & Training can't stop

Hi,
I encountered some problems that I could not solve while trying to reproduce the results. I hope you can give me some help.

  • First, trainIEEE39LoadSheddingAgent.py

In the process of running the code, I got an error:

py4j.protocol.Py4JError: An error occurred while calling t.__getstate__. Trace:
py4j.Py4JException: Method __getstate__([]) does not exist
	at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:318)
	at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:326)
	at py4j.Gateway.invoke(Gateway.java:274)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:214)
	at java.lang.Thread.run(Thread.java:744)

And the Traceback:

Traceback (most recent call last):
  File "trainIEEE39LoadSheddingAgent.py", line 145, in <module>
    train(ll, env, model_path)
  File "trainIEEE39LoadSheddingAgent.py", line 105, in train
    act.save(savedModel + "/" + model_name + "_lr_%s_100w.pkl" % (str(learning_rate)))
  File "/root/anaconda3/envs/py3ml/lib/python3.6/site-packages/baselines/deepq/simple.py", line 55, in save
    dill.dump((model_data, self._act_params), f)
  File "/root/anaconda3/envs/py3ml/lib/python3.6/site-packages/dill/dill.py", line 274, in dump
    pik.dump(obj)
  File "/root/anaconda3/envs/py3ml/lib/python3.6/pickle.py", line 409, in dump
    self.save(obj)
  File "/root/anaconda3/envs/py3ml/lib/python3.6/pickle.py", line 476, in save
    f(self, obj) # Call unbound method with explicit self
  File "/root/anaconda3/envs/py3ml/lib/python3.6/pickle.py", line 736, in save_tuple
    save(element)
  File "/root/anaconda3/envs/py3ml/lib/python3.6/pickle.py", line 476, in save
    f(self, obj) # Call unbound method with explicit self
  File "/root/anaconda3/envs/py3ml/lib/python3.6/site-packages/dill/dill.py", line 871, in save_module_dict
    StockPickler.save_dict(pickler, obj)
  File "/root/anaconda3/envs/py3ml/lib/python3.6/pickle.py", line 821, in save_dict
    self._batch_setitems(obj.items())
  File "/root/anaconda3/envs/py3ml/lib/python3.6/pickle.py", line 847, in _batch_setitems
    save(v)
  File "/root/anaconda3/envs/py3ml/lib/python3.6/pickle.py", line 476, in save
    f(self, obj) # Call unbound method with explicit self
  File "/root/anaconda3/envs/py3ml/lib/python3.6/site-packages/dill/dill.py", line 1355, in save_function
    obj.__dict__), obj=obj)
  File "/root/anaconda3/envs/py3ml/lib/python3.6/pickle.py", line 610, in save_reduce
    save(args)
  File "/root/anaconda3/envs/py3ml/lib/python3.6/pickle.py", line 476, in save
    f(self, obj) # Call unbound method with explicit self
  File "/root/anaconda3/envs/py3ml/lib/python3.6/pickle.py", line 751, in save_tuple
    save(element)
  File "/root/anaconda3/envs/py3ml/lib/python3.6/pickle.py", line 476, in save
    f(self, obj) # Call unbound method with explicit self
  File "/root/anaconda3/envs/py3ml/lib/python3.6/pickle.py", line 736, in save_tuple
    save(element)
  File "/root/anaconda3/envs/py3ml/lib/python3.6/pickle.py", line 476, in save
    f(self, obj) # Call unbound method with explicit self
  File "/root/anaconda3/envs/py3ml/lib/python3.6/site-packages/dill/dill.py", line 1098, in save_cell
    pickler.save_reduce(_create_cell, (f,), obj=obj)
  File "/root/anaconda3/envs/py3ml/lib/python3.6/pickle.py", line 610, in save_reduce
    save(args)
  File "/root/anaconda3/envs/py3ml/lib/python3.6/pickle.py", line 476, in save
    f(self, obj) # Call unbound method with explicit self
  File "/root/anaconda3/envs/py3ml/lib/python3.6/pickle.py", line 736, in save_tuple
    save(element)
  File "/root/anaconda3/envs/py3ml/lib/python3.6/pickle.py", line 521, in save
    self.save_reduce(obj=obj, *rv)
  File "/root/anaconda3/envs/py3ml/lib/python3.6/pickle.py", line 634, in save_reduce
    save(state)
  File "/root/anaconda3/envs/py3ml/lib/python3.6/pickle.py", line 476, in save
    f(self, obj) # Call unbound method with explicit self
  File "/root/anaconda3/envs/py3ml/lib/python3.6/site-packages/dill/dill.py", line 871, in save_module_dict
    StockPickler.save_dict(pickler, obj)
  File "/root/anaconda3/envs/py3ml/lib/python3.6/pickle.py", line 821, in save_dict
    self._batch_setitems(obj.items())
  File "/root/anaconda3/envs/py3ml/lib/python3.6/pickle.py", line 847, in _batch_setitems
    save(v)
  File "/root/anaconda3/envs/py3ml/lib/python3.6/pickle.py", line 496, in save
    rv = reduce(self.proto)
  File "/root/anaconda3/envs/py3ml/lib/python3.6/site-packages/py4j/java_gateway.py", line 1160, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/root/anaconda3/envs/py3ml/lib/python3.6/site-packages/py4j/protocol.py", line 324, in get_return_value
    format(target_id, ".", name, value))

  • Second, python trainKundur2areaGenBrakingAgent.py cannot be stopped.

Episodes keep increasing until the process is shut down by the system.
"% time spent exploring" always shows 2 in this situation.

  • Description
    These all occur with the environment versions required by the RLGC project.
    I also tried the latest versions of gym (0.14.0 and 0.15.3), but it still does not work.
    With my limited experience I really can't solve these problems, and I hope to learn which of my mistakes caused this result.
    Thank you for taking the time to look at this issue.

DQN can't find a good policy

Following your advice, I switched to Stable-Baselines instead of OpenAI Baselines for the Kundur system training.

def main(learning_rate, env):
    tf.reset_default_graph()  
    graph = tf.get_default_graph()

    model = DQN(CustomDQNPolicy, env, learning_rate=learning_rate, verbose=0)
    callback = SaveOnBestTrainingRewardCallback(check_freq=1000, storedData=storedData)
    time_steps = 900000
    model.learn(total_timesteps=int(time_steps), callback=callback)

    print("Saving final model to: " + savedModel + "/" + model_name + "_lr_%s_90w.pkl" % (str(learning_rate)))
    model.save(savedModel + "/" + model_name + "_lr_%s_90w.pkl" % (str(learning_rate)))

However, after 900,000 training steps the DQN agent cannot find a good policy. Please see the average-reward progress plot:

https://www.dropbox.com/preview/DQN_adaptivenose.png?role=personal

I used the following env settings

case_files_array.append(folder_dir +'/testData/Kundur-2area/kunder_2area_ver30.raw')
case_files_array.append(folder_dir+'/testData/Kundur-2area/kunder_2area.dyr')
dyn_config_file = folder_dir+'/testData/Kundur-2area/json/kundur2area_dyn_config.json'
rl_config_file = folder_dir+'/testData/Kundur-2area/json/kundur2area_RL_config_multiStepObsv.json'

My suggestion is that in the baseline scenario kunder_2area_ver30.raw (without increased system loading), a short circuit might not lead to loss of stability during the simulation. Therefore, the DQN agent (perhaps) finds a "no action" policy so as not to receive the actionPenalty = 2.0. According to the reward progress plot, during training the agent cannot find a policy better than a mean reward of 603.05, and when testing, mean_reward = 603.05 corresponds to the "no action" policy (please see the figure below):

https://www.dropbox.com/preview/no%20actions%20case.png?role=personal

However, this is only my guess; I may be wrong. I plan to try scenarios with increased load in order to reliably cause loss of stability during the simulation.

Originally posted by @frostyduck in #9 (comment)

Bug in the 2-area Kundur case

I've had some free time in the last few days and I think I have found a bug: during training, the agent cannot overcome the reward boundary of -602 in your Kundur 2-area case. The issue is that during training and testing in the environment (Kundur's scheme), short circuits are not simulated; I checked this. That is, the agent learns purely on the normal operating conditions of the system. In this case, the optimal policy is never to apply the dynamic brake, i.e., actions are always 0, which corresponds to the observed reward value (-602 or 603).

I'm guessing it has something to do with the PowerDynSimEnvDef modifications. Initially, you used PowerDynSimEnvDef_v2, and now I am working with PowerDynSimEnvDef_v7.

RLGC project code structure

Dear Qiuhua,
I have always struggled to connect power system simulation tools (like PSAT or BPA) to a publicly available RL environment like Gym. Your work is amazing. I'd like to know the boundaries of the project, mainly on the power system simulation side. To be more specific:

  1. Does the project support electromagnetic transient simulation?
  2. If I'd like to run RL on my own power networks, what part of the case files should I change? And do the model definition files have any documentation to guide me if I want to build my own power network model?
  3. I figured out that the functions you mention in Section III.B of your Transactions paper, like initStudyCase, applyAction, and nextStepDynSim, are all written in Java in the folder /org/pnnl/gov/pss_gateway. Are the Java files the bridge connecting the RL environment and the power system simulation software? Is the power system simulation software contained in the repo?
  4. If I'd like to have more actions or build my own rewards, what code should I change? Are those Java files used to define the actions and rewards?

Thank you again for the great benchmark.

Issue regarding accessing JAR file

I tried running test_IEEE300_Zone1_loadshedding.py and keep getting "Error: Unable to access jarfile". I tried reinstalling Java as well, but that did not solve the issue.

about yaml file

When I try to create a virtual environment from the yaml file you provided, I encounter an error like ResolvePackagesNotFound. I guess the YAML includes platform-specific build constraints, so it fails when transferring across platforms.

RLGC latest version cannot run examples

py4j.protocol.Py4JError: An error occurred while calling t.getStudyCases. Trace:
py4j.Py4JException: Method getStudyCases([]) does not exist

I have created two conda environments. The scripts of the old RLGC version run in both environments, while the scripts of the latest RLGC version always raise errors. How can I deal with this problem?

Fail to run this project...

Hi, thank you very much for providing such a platform. I have some questions about how to successfully run this project.
First of all, I read the README in detail and installed Anaconda3 and JDK 1.8 on Ubuntu. But whenever I create the environment, I get an error similar to "ResolvePackagesNotFound". I worked around this problem with Google's help, but still couldn't run the project. When I follow the instructions in the README, it keeps showing errors like "no module named py4j". (I know this is because the module was not installed, but shouldn't it be included in the .yml file?) So I want to ask whether I need some other prerequisites, such as extra configuration or parameter settings for the reinforcement learning environment in Python.

Cannot test codes due to outdated documentation

The documentation for running the code says:

source activate
cd RLGC/src/py
python trainIEEE39LoadSheddingAgent_discrete_action.py

Problems and custom fixes:

  1. There is no src/py folder.
  2. I found trainIEEE39LoadSheddingAgent_discrete_action.py under examples/IEEE39_load_shedding.
  3. Running trainIEEE39LoadSheddingAgent_discrete_action.py throws an import error,
    ModuleNotFoundError: No module named 'PowerDynSimEnvDef_v7', because of the code snippet
    from PowerDynSimEnvDef_v7 import PowerDynSimEnv
  4. I found PowerDynSimEnvDef_v7 under src/environments.
  5. Copied PowerDynSimEnvDef_v7 to examples/IEEE39_load_shedding and tried running trainIEEE39LoadSheddingAgent_discrete_action.py again.
  6. Got: Error: Unable to access jarfile .....IEEE39_load_shedding/lib/RLGCJavaServer1.0.0_alpha.jar
  7. Found RLGCJavaServer1.0.0_alpha.jar under RLGC/lib/ and copied it to examples/IEEE39_load_shedding/lib/.
  8. Finally got a Py4JJavaError:
    py4j.protocol.Py4JJavaError: An error occurred while calling t.initStudyCase.
    : java.io.FileNotFoundException: C:\Users\0000011369979\Documents\Work\FY2020_2\kochi\RLGC\RLGC_env\RLGC\examples\IEEE39_load_shedding\testData\IEEE39\json\IEEE39_dyn_config.json (The system cannot find the path specified)
    at java.base/java.io.FileInputStream.open0(Native Method)
    at java.base/java.io.FileInputStream.open(FileInputStream.java:213)
    at java.base/java.io.FileInputStream.(FileInputStream.java:155)
    ...
  9. It was a file path error again; copied IEEE39 from testData to IEEE39_load_shedding/testData/.
  10. Encountered error
    Traceback (most recent call last):
    File "C:/Users/0000011369979/Documents/Work/FY2020_2/kochi/RLGC/RLGC_env/RLGC/examples/IEEE39_load_shedding/trainIEEE39LoadSheddingAgent_discrete_action.py", line 125, in
    train(total_steps, lr, env, model_path, final_saved_model,saved_model_dir)
    File "C:/Users/0000011369979/Documents/Work/FY2020_2/kochi/RLGC/RLGC_env/RLGC/examples/IEEE39_load_shedding/trainIEEE39LoadSheddingAgent_discrete_action.py", line 90, in train
    load_path=model_path
    TypeError: learn() got an unexpected keyword argument 'network'

I finally gave up. It seems there are a lot of path-related issues. Moreover, I would like to know a simpler way of testing the author's code.

Mismatch between function names in Python and Java

Hi,
There seems to be a name mismatch between the functions defined in the IpssPyGateway.java file and the functions called from the Python files as methods of the ipss_app object. A few examples:

  1. ipss_app.Reset() is called in a few Python files, while the Java file has the method reset
  2. ipss_app.getEnvironmentObversations() is called in a few Python files, while the Java file has the method getEnvObversations

After I corrected them manually in a few Python files and ran python trainIEEE39LoadSheddingAgent.py, I got the following error:

py4j.protocol.Py4JJavaError: An error occurred while calling t.initStudyCase.
: java.lang.NullPointerException
	at java.util.Hashtable.put(Unknown Source)
	at org.pnnl.gov.pss_gateway.IpssPyGateway.initObsverationSpace(IpssPyGateway.java:657)
	at org.pnnl.gov.pss_gateway.IpssPyGateway.initStudyCase(IpssPyGateway.java:228)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
	at java.lang.reflect.Method.invoke(Unknown Source)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:214)
	at java.lang.Thread.run(Unknown Source)

Similarly, when I try to run python trainKundur2areaGenBrakingAgent.py, the following error is raised:

py4j.protocol.Py4JError: An error occurred while calling t.reset. Trace:
py4j.Py4JException: Method reset([class java.lang.Integer, class java.lang.Integer, class java.lang.Double, class java.lang.Double]) does not exist
	at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:318)
	at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:326)
	at py4j.Gateway.invoke(Gateway.java:274)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:214)
	at java.lang.Thread.run(Unknown Source)



Process finished with exit code 1

I am not sure how to debug or rectify these errors, as the problem seems to be deeper in the Java side and I am not able to debug it from Python.

Nikita Tomin

Dear colleagues, thank you! You have developed a very interesting tool. We would like to compare the performance of your DRL-based dynamic brake with our dynamic brake model based on the sub-Grammians method.

However, when I started the RLGC tool, I ran into a strange problem. I first installed the tool on my home laptop (Windows 10, Python 3.7), fully following your instructions, and everything worked perfectly: training of the dynamic brake model started with no warnings or errors. Then I installed the tool on my work computer (Ubuntu, Python 3.7), a powerful GPU workstation. The installation succeeded, but when I started training the Kundur scheme model, the following error occurred:

`Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.eclipse.jdt.internal.jarinjarloader.JarRsrcLoader.main(JarRsrcLoader.java:61)

Caused by: py4j.Py4JNetworkException
at py4j.GatewayServer.startSocket(GatewayServer.java:788)
at py4j.GatewayServer.start(GatewayServer.java:763)
at py4j.GatewayServer.start(GatewayServer.java:746)
at org.pnnl.gov.pss_gateway.IpssPyGateway.main(IpssPyGateway.java:1143)
... 5 more

Caused by: java.net.BindException: Address already in use: JVM_Bind
at java.net.DualStackPlainSocketImpl.bind0(Native Method)
at java.net.DualStackPlainSocketImpl.socketBind(Unknown Source)
at java.net.AbstractPlainSocketImpl.bind(Unknown Source)
at java.net.PlainSocketImpl.bind(Unknown Source)
at java.net.ServerSocket.bind(Unknown Source)
at py4j.GatewayServer.startSocket(GatewayServer.java:786)
... 8 more
`

In this case, the training process either ends abruptly (sometimes after one iteration) or does not start at all, and an error appears:

py4j.protocol.Py4JJavaError: An error occurred while calling t.initStudyCase.

When I returned home, the same Java warnings began to appear on my laptop when running python trainKundur2areaGenBrakingAgent.py. This is very strange, considering that everything worked well on the home laptop earlier. However, the other trainIEEE39LoadSheddingAgent_*.py models work fine.

I understand that this is somehow related to py4j. However, I am not experienced with Java and cannot understand why this happened.
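The java.net.BindException: Address already in use above means another process, often a stale Java gateway left over from a previous run, is still holding the py4j port. One possible workaround, assuming your environment version lets you pass java_port explicitly (as the PowerDynSimEnv constructor in the README example does), is to ask the OS for a free port first:

```python
import socket

def find_free_port():
    """Bind to port 0 so the OS picks an unused TCP port, then return it.
    Useful when the default py4j port is held by a stale Java process."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

java_port = find_free_port()
print("using port", java_port)  # pass this as the java_port argument
```

Alternatively, killing the leftover Java process frees the default port again.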

About the connection with java server

Hi, I got an error about the java server when I ran ieee39_3actions3levelsresultcheck_linux.py. I am not familiar with py4j. Hope that you can give me some help. Thank you.

(RL_Challenge) hlhsu@hlhsu-Vostro-5481:~/Documents/RLGC/src/py$ python ieee39_3actions3levelsresultcheck_linux.py
/home/hlhsu/anaconda3/envs/RL_Challenge/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
[... similar numpy FutureWarning lines from tensorflow and tensorboard omitted ...]
Traceback (most recent call last):
File "/home/hlhsu/.local/lib/python3.6/site-packages/py4j/java_gateway.py", line 977, in _get_connection
connection = self.deque.pop()
IndexError: pop from an empty deque

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/hlhsu/.local/lib/python3.6/site-packages/py4j/java_gateway.py", line 1115, in start
self.socket.connect((self.address, self.port))
ConnectionRefusedError: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "ieee39_3actions3levelsresultcheck_linux.py", line 38, in
case_files_array = gateway.new_array(gateway.jvm.String, 2)
File "/home/hlhsu/.local/lib/python3.6/site-packages/py4j/java_gateway.py", line 1695, in getattr
"\n" + proto.END_COMMAND_PART)
File "/home/hlhsu/.local/lib/python3.6/site-packages/py4j/java_gateway.py", line 1031, in send_command
connection = self._get_connection()
File "/home/hlhsu/.local/lib/python3.6/site-packages/py4j/java_gateway.py", line 979, in _get_connection
connection = self._create_connection()
File "/home/hlhsu/.local/lib/python3.6/site-packages/py4j/java_gateway.py", line 985, in _create_connection
connection.start()
File "/home/hlhsu/.local/lib/python3.6/site-packages/py4j/java_gateway.py", line 1127, in start
raise Py4JNetworkError(msg, e)
py4j.protocol.Py4JNetworkError: An error occurred while trying to connect to the Java server (127.0.0.1:25006)
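The final ConnectionRefusedError means nothing was listening on the py4j port (127.0.0.1:25006 in this log) when the Python side connected, i.e. the RLGC Java server was never started or has already exited. A minimal stdlib probe, written for this note (the helper name is illustrative; the port number is taken from the error message above), can confirm this before the gateway is created:

```python
import socket

def gateway_is_listening(host="127.0.0.1", port=25006, timeout=1.0):
    """Return True if something accepts TCP connections on host:port,
    i.e. the RLGC Java server appears to be up and reachable."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if not gateway_is_listening():
    print("Java server not reachable: start the RLGC jar manually, or use "
          "PowerDynSimEnvDef_v5 or newer, which launches it automatically.")
```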
