aqeelanwar / pedra

Programmable Engine for Drone Reinforcement Learning Applications

License: MIT License

Language: Python (100.00%)

Topics: airsim, drone, programmable-engine, python, reinforcement-learning, unreal-engine

pedra's Introduction


Hello,

I am Aqeel, a PhD candidate at the Georgia Institute of Technology, working towards energy-efficient machine learning systems design.

pedra's People

Contributors: aqeelanwar

pedra's Issues

What is the physical coordinates API?

Nice work!

PEDRA is a really good program, and I am trying to add some new reward settings to DQN. What I want to know is how to read the physical coordinates of every drone in the environment. First I used simGetVehiclePose and found that it returns PEDRA coordinates. In the game we can press P to get the physical coordinate information, but how can we get it in code?

Any help would be greatly appreciated!
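For reference, a minimal sketch of reading a drone's pose through the underlying AirSim Python client. The pose simGetVehiclePose returns is in the vehicle's local NED frame; treating the "physical" (global) position as that pose plus the spawn offset is an assumption here, since PEDRA's own offset handling isn't shown in this issue:

    import airsim

    client = airsim.MultirotorClient()
    client.confirmConnection()

    # Pose in the vehicle's local NED frame (what simGetVehiclePose returns)
    pose = client.simGetVehiclePose(vehicle_name='drone0')
    x, y, z = pose.position.x_val, pose.position.y_val, pose.position.z_val

    # Assumption: add the spawn offset to recover a global position.
    offset_x, offset_y, offset_z = 10.0, 5.0, 0.0  # hypothetical spawn offset (meters)
    print(x + offset_x, y + offset_y, z + offset_z)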

Simulation error

Hello sir,
I am using CUDA 10 and TensorFlow 1.14.0, and when I execute python main.py
I get this error:

------------------------------- Simulation begins -------------------------------
------------- Error -------------
<class 'UnboundLocalError'> DeepREINFORCE.py 172
local variable 'depth' referenced before assignment
Hit r and then backspace to start from this point

I then hit r and then backspace, and got:
------------------------------------- Drone -------------------------------------
Connected!
Client Ver:1 (Min Req: 1), Server Ver:1 (Min Req: 1)

------------- Error -------------
<class 'UnboundLocalError'> DeepREINFORCE.py 172
local variable 'depth' referenced before assignment
Hit r and then backspace to start from this point

Please help me, sir.

Unable to run main.py

Hi sir, I'm getting this error when attempting to run the main.py file. Hoping you can help me out with this.
[screenshot: error]

Question about module import error

Hi sir,
First of all, thank you for your marvelous work!

When I executed the main.py program, I encountered this warning message:

`Connected!
Client Ver:1 (Min Req: 1), Server Ver:1 (Min Req: 1)

C:\PEDRA\util\transformations.py:1914: UserWarning: failed to import module _transformations
warnings.warn('failed to import module %s' % name)

----------------- global -----------------
Initializing DQN
Initializing Target`

But the training started successfully and the program worked well after that.
I was wondering whether this warning affects the training progress or not.
Can you check this problem for me?

Thank you and best regards.

setting the velocity to 0 in each step

What is the reason for setting the velocity to 0 after each step in the agent?

    # command the velocity for up to 1 s (non-blocking call)
    self.client.moveByVelocityAsync(
        vx=vx, vy=vy, vz=vz, duration=1,
        drivetrain=airsim.DrivetrainType.MaxDegreeOfFreedom,
        yaw_mode=airsim.YawMode(is_rate=False, yaw_or_rate=180 * (alpha + psi) / np.pi),
        vehicle_name=self.vehicle_name)
    time.sleep(0.07)
    # zero-velocity command issued shortly afterwards
    self.client.moveByVelocityAsync(vx=0, vy=0, vz=0, duration=1, vehicle_name=self.vehicle_name)

Code

I had a few questions regarding the code and hope you can clear them up:

  1. Why are there so many network architectures in network.py?
  2. Will the performance be altered by a change in architecture?
  3. Where can I find more information regarding how the agent is rewarded and how that leads to changes in the weights?

Value Error in Courtyard Map

I'm getting the following error in the outdoor courtyard map:

<class 'ValueError'> DeepQLearning.py 172
cannot reshape array of size 1 into shape (0,0)
Hit r and then backspace to start from this point

I have to reconnect and resume training sometimes.

Error when running main.py

Dear Sir,
when I tried to run main.py, I got this issue:

(envPEDRA) E:\Major Project\PycharmProjects\PEDRA>python main.py
pygame 1.9.6
Hello from the pygame community. https://www.pygame.org/contribute.html
------------------------------- configs/config.cfg -------------------------------
[general_params]
run_name: Tello_indoor
env_type: Indoor
env_name: indoor_long
mode: infer
SimMode: Multirotor
drone: DJIMavic
ClockSpeed: 20
algorithm: DeepQLearning
ip_address: 127.0.0.5
[camera_params]
width: 320
height: 180
fov_degrees: 80

---------------------------------- Environment ----------------------------------
Successfully loaded environment: indoor_long
E:\Major Project\PycharmProjects\PEDRA\util\transformations.py:1914: UserWarning: failed to import module _transformations
warnings.warn('failed to import module %s' % name)
--------------------------- configs/DeepQLearning.cfg ---------------------------
[simulation_params]
custom_load: False
custom_load_path: models/trained/Indoor/indoor_long/Imagenet/e2e/drone0/drone0
[RL_params]
input_size: 103
num_actions: 25
train_type: e2e
wait_before_train: 5000
max_iters: 150000
buffer_len: 10000
batch_size: 32
epsilon_saturation: 100000
crash_thresh: 1.3
Q_clip: True
train_interval: 2
update_target_interval: 8000
gamma: 0.99
dropout_rate: 0.1
learning_rate: 2e-06
switch_env_steps: 2000000000
epsilon_model: exponential

------------------------------------- Drone -------------------------------------
Connected!
Client Ver:1 (Min Req: 1), Server Ver:1 (Min Req: 1)

Initializing DQN
Traceback (most recent call last):
  File "main.py", line 99, in <module>
    eval(name)
  File "<string>", line 1, in <module>
  File "E:\Major Project\PycharmProjects\PEDRA\algorithms\DeepQLearning.py", line 64, in DeepQLearning
    p_z,f_z, fig_z, ax_z, line_z, fig_nav, ax_nav, nav = initialize_infer(env_cfg=env_cfg, client=client, env_folder=env_folder)
  File "E:\Major Project\PycharmProjects\PEDRA\aux_functions.py", line 86, in initialize_infer
    f_z = env_cfg.floor_z/100
TypeError: unsupported operand type(s) for /: 'DotMap' and 'int'

Unable to run main.py(D-PEDRA)

Hi sir, I'm getting this error when attempting to run the main.py file. It stopped and couldn't open the UE file. Hoping you can help me out with this.
[screenshot: image]

Output graphs

Hi,
How do I access the output graphs like the ones demonstrated here?

[screenshot: Capture]

CUDA dll import error

Hi,
When I try to run python main.py, I get this error under the environment logs. I can't install CUDA 8.0, as it's too old for my NVIDIA driver, and CUDA 11.0 (the current version) leads to the same error.

Any advice on how it can be fixed? Thanks a lot in advance.
[screenshot: Capture]

msgpackrpc.error

Hi sir, I'm getting this error when running main.py:

Traceback (most recent call last):
  File "C:/SC/AirSim SC/PEDRA/main.py", line 99, in <module>
    eval(name)
  File "<string>", line 1, in <module>
  File "C:\SC\AirSim SC\PEDRA\algorithms\DeepQLearning.py", line 21, in DeepQLearning
    client, old_posit, initZ = connect_drone(ip_address=cfg.ip_address, phase=cfg.mode, num_agents=cfg.num_agents)
  File "C:\SC\AirSim SC\PEDRA\aux_functions.py", line 300, in connect_drone
    client.enableApiControl(True, name_agent)
  File "C:\tools\IDEs\Anaconda3\lib\site-packages\airsim\client.py", line 40, in enableApiControl
    return self.client.call('enableApiControl', is_enabled, vehicle_name)
  File "C:\tools\IDEs\Anaconda3\lib\site-packages\msgpackrpc\session.py", line 41, in call
    return self.send_request(method, args).get()
  File "C:\tools\IDEs\Anaconda3\lib\site-packages\msgpackrpc\future.py", line 45, in get
    raise error.RPCError(self._error)
msgpackrpc.error.RPCError: rpclib: function 'enableApiControl' (called with 2 arg(s)) threw an exception. The exception contained this information: Vehicle API for 'drone0' is not available. This could either because this is simulation-only API or this vehicle does not exist.

Hoping you can help me out with this.

'PedraAgent' object has no attribute 'save_network'

After 8000 iterations, the training stops with this message:
global - Switching Target Network
------------- Error -------------
<class 'AttributeError'> DeepQLearning.py 333
'PedraAgent' object has no attribute 'save_network'

unable to see screens during simulation

I do not see the simulation screen with the drone. This includes the depth map and the segmentation map.

However, I can see the floorplan and the front facing camera.

why do we retrieve episodes like this?

In DeepQLearning.py, when we switch levels, we save the current level's episode count with this line:

    epi_env_array[name_agent][level[name_agent]] = episode[name_agent]

Then we initialize the level and episode with these two lines:

    level[name_agent] = (level[name_agent] + 1) % len(reset_array[name_agent])
    episode[name_agent] = epi_env_array[name_agent][int(level[name_agent] / 3)]

Since level[name_agent] already has the new, correct value (due to the % len(reset_array[name_agent])), my question is: why not replace

    episode[name_agent] = epi_env_array[name_agent][int(level[name_agent] / 3)]

with

    episode[name_agent] = epi_env_array[name_agent][level[name_agent]]

I'm guessing the / 3 is due to the fact that you have 3 initial positions in unreal_envs/initial_positions.py.
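For illustration, a quick check of the two index expressions, assuming three initial positions per environment (so the level cycles through 0, 1, 2):

    # Hypothetical check of the two index expressions with 3 levels.
    num_levels = 3
    for step in range(6):
        level = step % num_levels
        print(level, int(level / 3), level == int(level / 3))
    # level cycles 0, 1, 2 while int(level / 3) stays 0, so the two
    # expressions only agree at level 0 -- they are not interchangeable.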

Getting real time output

I need to use real-time images from the Unreal environments as input to my model, to check whether the current view is suitable for moving forward; if it is not, my program calculates an angle and commands the drone to rotate. I want to do this in move_around mode.
How can I use get_MonocularImageRGB() to get an image? I don't know what the arguments to this function are or what the output is, and I don't have any idea what to do.
Please help me with what actions I should perform.
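For reference, a minimal sketch of fetching a front-camera RGB frame directly through the AirSim client; PEDRA's get_MonocularImageRGB() presumably wraps something like this, but its exact signature isn't shown in this issue:

    import airsim
    import numpy as np

    client = airsim.MultirotorClient()
    client.confirmConnection()

    # Request one uncompressed RGB frame from camera "0" (the front camera).
    responses = client.simGetImages(
        [airsim.ImageRequest("0", airsim.ImageType.Scene, False, False)],
        vehicle_name='drone0')
    r = responses[0]

    # Decode the raw bytes into an H x W x 3 uint8 array.
    img = np.frombuffer(r.image_data_uint8, dtype=np.uint8).reshape(r.height, r.width, 3)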

Average Return

Hi.
I tried to use this open-source project to compare the performance of different algorithms. According to the definition of the return in the code, it should keep increasing. However, I ran the code (using DQN, REINFORCE, and PPO) multiple times and plotted drone0/Return in the TensorBoard log, and the average return seemed to converge.
I changed the network structure from C3F2 to AlexNetDuel for the DQN algorithm, and the return figure seemed to match the paper you provided. But REINFORCE and PPO are still not able to navigate the drone over a long safe distance. Or I missed something...
Could you please give me some guidance on this issue?
Thanks for your work. It helps a lot.

Error when running main.py

When I set custom_load: True, this error occurred. I tried googling to find a solution, but I could not solve it.
When I change True to False, the error does not occur, but I want to load custom weights to see them on TensorBoard.
Help me please...

C:\Users\user\DRLwithTL-master\util\transformations.py:1914: UserWarning: failed to import module _transformations
warnings.warn('failed to import module %s' % name)
pygame 1.9.6
Hello from the pygame community. https://www.pygame.org/contribute.html

Successfully loaded environment: indoor_techno
------------------------------ Drone ------------------------------
Connected!
Client Ver:1 (Min Req: 1), Server Ver:1 (Min Req: 1)

------------------------------ DQN ------------------------------

Loading weights from: models/trained/Indoor/indoor_cloud/Imagenet/e2e/e2e
Fatal Python error: (pygame parachute) Segmentation Fault
TypeError: __init__() missing 3 required positional arguments: 'node_def', 'op', and 'message'

Thread 0x00000e40 (most recent call first):
File "C:\Users\user\Anaconda3\envs\myenv\lib\threading.py", line 296 in wait
File "C:\Users\user\Anaconda3\envs\myenv\lib\queue.py", line 170 in get
File "C:\Users\user\Anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\summary\writer\event_file_writer.py", line 159 in run
File "C:\Users\user\Anaconda3\envs\myenv\lib\threading.py", line 917 in _bootstrap_inner
File "C:\Users\user\Anaconda3\envs\myenv\lib\threading.py", line 885 in _bootstrap

Thread 0x00002e94 (most recent call first):
File "C:\Users\user\Anaconda3\envs\myenv\lib\threading.py", line 296 in wait
File "C:\Users\user\Anaconda3\envs\myenv\lib\queue.py", line 170 in get
File "C:\Users\user\Anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\summary\writer\event_file_writer.py", line 159 in run
File "C:\Users\user\Anaconda3\envs\myenv\lib\threading.py", line 917 in _bootstrap_inner
File "C:\Users\user\Anaconda3\envs\myenv\lib\threading.py", line 885 in _bootstrap

Current thread 0x00000620 (most recent call first):
File "C:\Users\user\Anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\lib\io\file_io.py", line 384 in get_matching_files_v2
File "C:\Users\user\Anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\lib\io\file_io.py", line 363 in get_matching_files
File "C:\Users\user\Anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\training\checkpoint_management.py", line 366 in checkpoint_exists_internal
File "C:\Users\user\Anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\training\saver.py", line 1280 in restore
File "C:\Users\user\DRLwithTL-master\network\agent.py", line 458 in load_network
File "C:\Users\user\DRLwithTL-master\network\agent.py", line 59 in init
File "main.py", line 40 in

Doc update

Hello,

I tried installing your repository, and the README should explicitly mention using Python 3.6.
I was running Python 3.8.3 and could not install the required packages from requirements_cpu.txt or requirements_gpu.txt.

Time Out Error

Hello @aqeelanwar

Thank you for your great project.
I can run this project in move_around mode without any error, but when I try to run it in infer mode I get this error:

"""
Traceback (most recent call last):
File "main.py", line 104, in
eval(name)
File "", line 1, in
File "E:\Thesis\PEDRA\algorithms\DeepQLearning.py", line 28, in DeepQLearning
client, old_posit, initZ = connect_drone(ip_address=cfg.ip_address, phase=cfg.mode, num_agents=cfg.num_agents)
File "E:\Thesis\PEDRA\aux_functions.py", line 297, in connect_drone
client.confirmConnection()
File "C:\Users\hamed\anaconda3\envs\Thesis\lib\site-packages\airsim\client.py", line 54, in confirmConnection
if self.ping():
File "C:\Users\hamed\anaconda3\envs\Thesis\lib\site-packages\airsim\client.py", line 24, in ping
return self.client.call('ping')
File "C:\Users\hamed\anaconda3\envs\Thesis\lib\site-packages\msgpackrpc\session.py", line 41, in call
return self.send_request(method, args).get()
File "C:\Users\hamed\anaconda3\envs\Thesis\lib\site-packages\msgpackrpc\future.py", line 43, in get
raise self._error
msgpackrpc.error.TimeoutError: Request timed out

""""
Can you help me, please?
Thank you

Cannot Execute Main.py

Note - nvidia_smi wasn't recognised, so I had to import pynvml.smi to use nvidia_smi, but it now shows that the nvmlInit attribute is missing.

C:\PEDRA>python main.py
pygame 1.9.6
Hello from the pygame community. https://www.pygame.org/contribute.html
------------------------------- configs/config.cfg -------------------------------
[general_params]
run_name: Tello_indoor
env_type: Indoor
env_name: indoor_long
mode: train
SimMode: Multirotor
drone: DJIMavic
ClockSpeed: 20
algorithm: DeepQLearning
ip_address: 127.0.0.5
num_agents: 3
[camera_params]
width: 320
height: 180
fov_degrees: 80
C:\PEDRA\util\transformations.py:1914: UserWarning: failed to import module _transformations
warnings.warn('failed to import module %s' % name)
--------------------------- configs/DeepQLearning.cfg ---------------------------
[simulation_params]
load_data: False
load_data_path: DeepNet/models/Tello_indoor/VanLeer/
distributed_algo: LocalLearningLocalUpdate
[RL_params]
input_size: 103
num_actions: 25
train_type: e2e
wait_before_train: 5000
max_iters: 150000
buffer_len: 10000
batch_size: 32
epsilon_saturation: 100000
crash_thresh: 1.3
Q_clip: True
train_interval: 2
update_target_interval: 8000
gamma: 0.99
dropout_rate: 0.1
learning_rate: 2e-06
switch_env_steps: 2000000000
epsilon_model: exponential
custom_load: False
custom_load_path: models/trained/Indoor/indoor_long/Imagenet/e2e/drone0/drone0
communication_interval: 100
average_connectivity: 2

---------------------------------- Environment ----------------------------------
The system cannot find the path specified.
Successfully loaded environment: indoor_long

------------------------------------- Drone -------------------------------------
Connected!
Client Ver:1 (Min Req: 1), Server Ver:1 (Min Req: 1)

Traceback (most recent call last):
File "main.py", line 87, in
eval(name)
File "", line 1, in
File "C:\PEDRA\algorithms\DeepQLearning.py", line 37, in DeepQLearning
nvidia_smi.nvmlInit()
AttributeError: type object 'nvidia_smi' has no attribute 'nvmlInit'

No floor plan

Hi,
Thanks for sharing your code. Unfortunately, there's a problem with the indoor_long floor plan: I can't download it, and it's necessary for infer mode.
With regards

Reading custom weights

Hi, my training crashed after 30000 iterations or so, and I saved the weights before quitting the program. The next time I read the weights using the custom path, it looks like the drone is starting all over again, in that the drone doesn't reach distances it did before I saved the weights. I just want to know if I'm doing it the right way. My folder structure is like this: C:\Users\Sumukha\Desktop\Flight\DRL\PEDRA\models\trained\Indoor\indoor_long\Imagenet\e2e\drone0, and under drone0 I have drone0_user.data, .index, and .meta.
In my DeepQLearning.cfg, I have given the path as models/trained/Indoor/indoor_long/Imagenet/e2e/drone0/drone0_user. This will read all 3 files, right?
Also, I was wondering if there is any difference between giving custom_load in config.cfg and in DeepQLearning.cfg.

Is there a way you would recommend through which I'd know whether the weights were actually loaded from the custom path? The console does display that it read the weights, but the movement of the drone doesn't support it.
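For reference, a minimal sketch of how TF1-style checkpoints are addressed, assuming the standard tf.train APIs (the path is taken from this issue): a checkpoint path is a prefix, not a file, so one prefix covers the .data, .index, and .meta files together.

    import tensorflow as tf  # TF 1.x, as used by PEDRA

    # A checkpoint "path" is a prefix: restoring from .../drone0_user picks up
    # drone0_user.data-*, drone0_user.index and drone0_user.meta together.
    prefix = 'models/trained/Indoor/indoor_long/Imagenet/e2e/drone0/drone0_user'

    # Assumes the network's variables have already been built in the graph.
    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, prefix)  # one prefix loads all three files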

small spelling mistake in readme file

Hi, loving this project!

There is a small spelling mistake in the README file: it says to make a folder called unreal_env, but in the code (.gitignore, initial_positions.py, and aux_functions.py) the folder is called unreal_envs.

Is it possible to use another Airsim environment in PEDRA?

Hi Sir,

Thank you for your work!

I want to do some reinforcement learning research in outdoor environments, and I was wondering if it is possible to add a new AirSim environment (for example, the City Environment) to PEDRA?

Thank you and best regards.

main.py run error

Hi,
When I run the main.py script after editing the config files, I get this error. Are you able to advise on how to fix it?
[screenshot: Capture]

Kind regards,
Kevin Fuller

np.reshape(0, 0, 3) error - from airsim

Hi again!

There is an error in AirSim where it sometimes returns an empty image with a length of 1. It happens in the get_depth() or get_state() functions, which then raise this error:

ValueError: cannot reshape array of size 1 into shape (0,0,3)

I have written a simple hacky workaround; do you want me to make a PR, or should I give you the code here?

Do you also get this from time to time? It looks like there isn't a solution for it yet; see these related issues on the AirSim repo:
microsoft/AirSim#1840
microsoft/AirSim#1710
microsoft/AirSim#1755
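The workaround itself isn't included in the issue; purely as an illustration, a common pattern is to re-request the frame until AirSim returns a non-empty buffer (the function name and retry count here are hypothetical):

    import numpy as np

    def get_frame_with_retry(client, request, max_tries=10):
        # AirSim occasionally returns an empty image (buffer length 1);
        # simply asking again usually yields a valid frame.
        for _ in range(max_tries):
            response = client.simGetImages([request])[0]
            data = np.frombuffer(response.image_data_uint8, dtype=np.uint8)
            if data.size > 1 and response.height > 0 and response.width > 0:
                return data.reshape(response.height, response.width, 3)
        raise RuntimeError('AirSim kept returning empty images')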

Is there any way to zoom in the drones icon on the floorplan?

The drone icon on the floorplan is too small to visualize. I guess it can be adjusted in the UE4 project? Also, the function that switches the camera between different drones cannot be implemented without the UE4 project.

Another question: is it possible to infer multiple drones in one environment with the same weights?

It is an exciting RL engine. Thank you for your work!

Iterations in indoor_long

How many iterations does it take to completely train the drone to autonomously navigate the indoor_long map? At 200k iterations, it is currently navigating the first corridor and crashes on the left edge of the second corridor.

Error when running main.py

When I tried to run main.py with a downloaded environment, I got this issue:

(airsim) C:\Users\Cagalli\Documents\Unreal Projects\MyProject3\Build\PEDRA>python main.py
pygame 1.9.6
Hello from the pygame community. https://www.pygame.org/contribute.html
------------------------------- configs/config.cfg -------------------------------
[general_params]
run_name: Tello_indoor
custom_load: True
custom_load_path: models/trained/Indoor/indoor_cloud/Imagenet/e2e/e2e
env_type: Indoor
env_name: indoor_techno
phase: infer
SimMode: Multirotor
drone: DJIMavic
ClockSpeed: 3
algorithm: DeepQLearning
ip_address: 127.0.0.2
[camera_params]
width: 640
height: 360
fov_degrees: 80
C:\Users\Cagalli\Documents\Unreal Projects\MyProject3\Build\PEDRA\util\transformations.py:1914: UserWarning: failed to import module _transformations
warnings.warn('failed to import module %s' % name)
--------------------------- configs/DeepQLearning.cfg ---------------------------
[simulation_params]
load_data: False
load_data_path: DeepNet/models/Tello_indoor/VanLeer/
[RL_params]
input_size: 227
num_actions: 25
train_type: e2e
wait_before_train: 10000
max_iters: 150000
buffer_len: 30000
batch_size: 32
epsilon_saturation: 150000
crash_thresh: 1.3
Q_clip: True
train_interval: 3
update_target_interval: 80000
gamma: 0.99
dropout_rate: 0.1
learning_rate: 1e-06
switch_env_steps: 2000
epsilon_model: exponential
Traceback (most recent call last):
File "main.py", line 13, in
eval(name)
File "", line 1, in
File "C:\Users\Cagalli\Documents\Unreal Projects\MyProject3\Build\PEDRA\algorithms\DeepQLearning.py", line 63, in DeepQLearning
env_process, env_folder = start_environment(env_name=cfg.env_name)
File "C:\Users\Cagalli\Documents\Unreal Projects\MyProject3\Build\PEDRA\aux_functions.py", line 40, in start_environment
env_process = subprocess.Popen(path)
File "C:\Users\Cagalli\Anaconda3\envs\airsim\lib\subprocess.py", line 707, in init
restore_signals, start_new_session)
File "C:\Users\Cagalli\Anaconda3\envs\airsim\lib\subprocess.py", line 992, in _execute_child
startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified

msgpackrpc.error.RPCError: rpclib: function 'enableApiControl' (called with 2 arg(s)) threw an exception.

I think this issue is about the num_agents settings. Could you help me with this code?

[screenshot: image]

---------------------------------- Environment ----------------------------------
Successfully loaded environment: indoor_techno

------------------------------------- Drone -------------------------------------
Connected!
Client Ver:1 (Min Req: 1), Server Ver:1 (Min Req: 1)

Traceback (most recent call last):
File "main.py", line 87, in
eval(name)
File "", line 1, in
File "C:\Users\hshsh\OneDrive\Desktop\pedra2\algorithms\DeepQLearning.py", line 26, in DeepQLearning
client, old_posit, initZ = connect_drone(ip_address=cfg.ip_address, phase=cfg.mode, num_agents=cfg.num_agents)
File "C:\Users\hshsh\OneDrive\Desktop\pedra2\aux_functions.py", line 247, in connect_drone
client.enableApiControl(True, name_agent)
File "C:\Users\hshsh\Anaconda3\envs\hs\lib\site-packages\airsim\client.py", line 37, in enableApiControl
return self.client.call('enableApiControl', is_enabled, vehicle_name)
File "C:\Users\hshsh\Anaconda3\envs\hs\lib\site-packages\msgpackrpc\session.py", line 41, in call
return self.send_request(method, args).get()
File "C:\Users\hshsh\Anaconda3\envs\hs\lib\site-packages\msgpackrpc\future.py", line 45, in get
raise error.RPCError(self._error)
msgpackrpc.error.RPCError: rpclib: function 'enableApiControl' (called with 2 arg(s)) threw an exception. The exception contained this information: Vehicle API for 'drone0' is not available. This could either because this is simulation-only API or this vehicle does not exist.
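For context, AirSim resolves vehicle names against the Vehicles section of settings.json, so this error usually means no vehicle named drone0 is defined there. PEDRA normally generates this file from config.cfg, so the exact fields below are an assumption; a minimal sketch of writing such an entry:

    import json, os

    settings = {
        "SettingsVersion": 1.2,
        "SimMode": "Multirotor",
        "Vehicles": {
            # one entry per agent: drone0, drone1, ... should match num_agents
            "drone0": {"VehicleType": "SimpleFlight", "X": 0, "Y": 0, "Z": 0},
        },
    }

    path = os.path.expanduser('~/Documents/AirSim/settings.json')
    with open(path, 'w') as f:
        json.dump(settings, f, indent=2)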

Resuming training using learned weights

Hi, while training, my system crashes at times due to unknown reasons, so I was wondering if there is any way to reload the weights saved at some point during a previous training session instead of starting all over.

Linux support ?

Hello,
I tried your project and downloaded everything (Unreal and some environments).
When I execute the main.py file, I get the following error:

OSError: [Errno 8] Exec format error: '...PEDRA/unreal_envs/indoor_long/indoor_long.exe'

This makes sense because I am on Ubuntu. Does this project support Ubuntu? If so, could you help me? Where can I find compatible environments?

The drone hits the wall and gets stuck in a loop during training in the indoor_vanleer environment. Could you help modify the environment or the drone settings?

Thank you for your marvelous work!

I found an issue where the drone hits the wall and gets stuck in a loop during training in the indoor_vanleer environment.

Could you help modify the environment or the drone settings?
Or could we modify the orientation of the drone at the starting point?

Here is the link to the recording video:
https://youtu.be/E9hx_QkMAPE

Any help is appreciated.

msgpackrpc.error

Hi Aqeel,
I'm getting this error after a few hundred iterations. Hitting r and backspace doesn't seem to resolve it.
I've attached the setting.json file with this.

Cheers
[screenshot: Capture 3]

settings.txt

tf.contrib.layers.flatten deprecated in tensorflow2

Hi,

I am facing some issues while trying to make use of PEDRA. I am currently using TensorFlow 2.0 and used the tf_upgrade_v2 script to get PEDRA to run; however, tf.contrib.layers.flatten has been deprecated. I've tried finding solutions to resolve it, and most of the solutions online suggest using tf.keras.layers.Flatten instead. I've tried using it, and this is the error I've faced. Any help will be deeply appreciated!

------------------------------------- Drone -------------------------------------
Connected!
Client Ver:1 (Min Req: 1), Server Ver:1 (Min Req: 1)

----------------- drone0 -----------------
Initializing DQN
WARNING:tensorflow:From C:\Users\teeko\venv\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py:1666: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
WARNING:tensorflow:From C:\Users\teeko\venv\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py:1666: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
Traceback (most recent call last):
File "main.py", line 99, in
eval(name)
File "", line 1, in
File "C:\Users\teeko\ITPGitClone\PEDRA\algorithms\DeepQLearning.py", line 49, in DeepQLearning
agent[name_agent] = PedraAgent(algorithm_cfg, client, name='DQN', vehicle_name=name_agent)
File "C:\Users\teeko\ITPGitClone\PEDRA\network\agent.py", line 29, in init
self.network_model = eval(net_mod)
File "", line 1, in
File "C:\Users\teeko\ITPGitClone\PEDRA\network\network_models.py", line 40, in init
self.model = C3F2(self.X, cfg.num_actions, cfg.train_fc)
File "C:\Users\teeko\ITPGitClone\PEDRA\network\network.py", line 39, in init
self.flat = tf.keras.layers.Flatten(self.conv3)
File "C:\Users\teeko\venv\lib\site-packages\tensorflow\python\keras\layers\core.py", line 631, in init
self.data_format = conv_utils.normalize_data_format(data_format)
File "C:\Users\teeko\venv\lib\site-packages\tensorflow\python\keras\utils\conv_utils.py", line 192, in normalize_data_format
data_format = value.lower()
AttributeError: 'Tensor' object has no attribute 'lower'
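For what it's worth, the traceback suggests the Flatten layer is being constructed with the tensor as its first argument (which Keras interprets as data_format). Keras layers are instantiated first and then called on a tensor, so a likely fix, assuming self.conv3 is the tensor to flatten, is:

    # tf.contrib.layers.flatten(x) was a function; tf.keras.layers.Flatten is
    # a layer class, so instantiate it and then apply it to the tensor:
    self.flat = tf.keras.layers.Flatten()(self.conv3)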

Loading pretrained alexnet

I was trying to load AlexNet (I assume?) pretrained on ImageNet, as described in an outdated blog post. I used the imagenet.npy linked there and got an "unable to open table" error:
[screenshot: image]

Is this feature deprecated, or is it a problem on my end? Also, is it even a good idea to start with an ImageNet-pretrained net, and if so, which format should I use, .npy?

In infer mode, the drone flies nonstop in a circle. Could you provide a detailed tutorial document for infer mode?

Thank you for your marvelous work!

I am confused about the checkpoint, the weights, and custom_load_path when inferencing with DeepREINFORCE,
and I cannot find a detailed tutorial document.

This is the setting of config.cfg:
= = =
[general_params]
run_name: Tello_indoor
env_type: Indoor
env_name: indoor_cloud
ip_address: 127.0.0.5
algorithm: DeepREINFORCE
mode: infer

[drone_params]
SimMode: Multirotor
num_agents: 1
drone: DJIMavic
ClockSpeed: 100

[camera_params]
width: 320
height: 240
fov_degrees: 94
= = =

This is the setting of DeepREINFORCE.cfg:
= = =
[simulation_params]
custom_load: True
custom_load_path: models/trained/Indoor/indoor_cloud/Imagenet/e2e/drone0
distributed_algo: GlobalLearningGlobalUpdate-SA

[RL_params]
input_size: 103
num_actions: 25
train_type: e2e
total_episodes: 15000000
batch_size: 32
crash_thresh: 1.3
learning_rate: 1e-4
switch_env_steps: 2000000000
gamma: 0.99

[distributed_RL params]
communication_interval: 100
average_connectivity: 2
= = =

This is the content of checkpoint file:
= = =
model_checkpoint_path: "drone0_15300"
all_model_checkpoint_paths: "drone0_14900"
all_model_checkpoint_paths: "drone0_15000"
all_model_checkpoint_paths: "drone0_15100"
all_model_checkpoint_paths: "drone0_15200"
all_model_checkpoint_paths: "drone0_15300"
= = =

The Unreal Engine crashed once, nondeterministically.
The environment was restarted and reconnected.

Then I paused training twice to save the weights, and then recovered the connection to the Unreal Engine and continued training.

I do not know why there are so many checkpoints.

This is the screenshot of the folder:
https://drive.google.com/file/d/1novWNoKip4CqZdd7SDjkhvKn991vA4iv/view?usp=sharing

After the training process was interrupted:
https://drive.google.com/file/d/1RrvFIriWNHuTOZwKRLMQP4va8Rup1ayJ/view?usp=sharing

I set 'custom_load_path:' to:
custom_load_path: models/trained/Indoor/indoor_cloud/Imagenet/e2e/drone0

These error messages were shown:
= = =
Fatal Python error: (pygame parachute) Segmentation Fault
TypeError: __init__() missing 3 required positional arguments: 'node_def', 'op', and 'message'

Thread 0x00006558 (most recent call first):
\miniconda3\envs\p36\lib\threading.py", line 295 in wait
\miniconda3\envs\p36\lib\queue.py", line 164 in get
\miniconda3\envs\p36\lib\site-packages\tensorflow_core\python\summary\writer\event_file_writer.py", line 159 in run
\miniconda3\envs\p36\lib\threading.py", line 916 in _bootstrap_inner
\miniconda3\envs\p36\lib\threading.py", line 884 in _bootstrap

Thread 0x000068d4 (most recent call first):
\miniconda3\envs\p36\lib\threading.py", line 295 in wait
\miniconda3\envs\p36\lib\queue.py", line 164 in get
\miniconda3\envs\p36\lib\site-packages\tensorflow_core\python\summary\writer\event_file_writer.py", line 159 in run
\miniconda3\envs\p36\lib\threading.py", line 916 in _bootstrap_inner
\miniconda3\envs\p36\lib\threading.py", line 884 in _bootstrap

Current thread 0x000065d8 (most recent call first):
\miniconda3\envs\p36\lib\site-packages\tensorflow_core\python\client\session.py", line 1443 in _call_tf_sessionrun
\miniconda3\envs\p36\lib\site-packages\tensorflow_core\python\client\session.py", line 1350 in _run_fn
\miniconda3\envs\p36\lib\site-packages\tensorflow_core\python\client\session.py", line 1365 in _do_call
\miniconda3\envs\p36\lib\site-packages\tensorflow_core\python\client\session.py", line 1359 in _do_run
\miniconda3\envs\p36\lib\site-packages\tensorflow_core\python\client\session.py", line 1180 in _run
\miniconda3\envs\p36\lib\site-packages\tensorflow_core\python\client\session.py", line 956 in run
miniconda3\envs\p36\lib\site-packages\tensorflow_core\python\training\saver.py", line 1290 in restore
\network\network_models.py", line 361 in load_network
\network\network_models.py", line 244 in init
File "", line 1 in
\network\agent.py", line 29 in init
\algorithms\DeepREINFORCE.py", line 58 in DeepREINFORCE
File "", line 1 in
File "main.py", line 99 in
= = =

When I set 'custom_load_path:' to:
custom_load_path: models/trained/Indoor/indoor_cloud/Imagenet/e2e/drone0/drone0_14900
or
custom_load_path: models/trained/Indoor/indoor_cloud/Imagenet/e2e/drone0/drone0_15000
or
. . .
or
custom_load_path: models/trained/Indoor/indoor_cloud/Imagenet/e2e/drone0/drone0_15300

The drone flies nonstop in a circle:
https://drive.google.com/file/d/12610RW8uNrlfgC2Xu_s6bYTNkJL0Q68L/view?usp=sharing

Could you provide a detailed tutorial document on how to correctly train/infer, save the weights, set custom_load_path, and so on?

Any help is appreciated.
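For reference, a minimal sketch of how a TF1 'checkpoint' file resolves to a weight prefix, assuming the standard tf.train API (the directory is taken from this issue). This may explain both the accumulating drone0_NNNNN entries and why the bare directory path fails while a full prefix loads:

    import tensorflow as tf  # TF 1.x

    ckpt_dir = 'models/trained/Indoor/indoor_cloud/Imagenet/e2e/drone0'

    # Reads the 'checkpoint' file in ckpt_dir and returns the newest prefix,
    # e.g. '.../drone0/drone0_15300'. A new prefix is written at every save,
    # which is why several drone0_NNNNN checkpoints accumulate over time.
    latest = tf.train.latest_checkpoint(ckpt_dir)
    print(latest)

    # custom_load_path must be such a prefix, not the directory itself:
    # saver.restore(sess, latest)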
