
navrep's Issues

AttributeError: module 'types' has no attribute 'CellType'

Hi,
I want to test out this environment with the pretrained models in your repository.
After I run "python -m navrep.scripts.cross_test_navreptrain_in_ianenv --backend VAE_LSTM --encoding V_ONLY --render", I get this:

2022-11-11 16:58:51.825066: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2022-11-11 16:58:51.835343: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2112000000 Hz
2022-11-11 16:58:51.837113: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x83e3410 executing computations on platform Host. Devices:
2022-11-11 16:58:51.837227: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): <undefined>, <undefined>
2022-11-11 16:58:51.931099: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
0%| | 0/3 [00:02<?, ?it/s]
Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/tutu/navrepvenv/lib/python3.6/site-packages/navrep/scripts/cross_test_navreptrain_in_ianenv.py", line 101, in <module>
    c_model = PPO2.load(c_model_path)
  File "/home/tutu/navrepvenv/lib/python3.6/site-packages/stable_baselines/common/base_class.py", line 936, in load
    data, params = cls._load_from_file(load_path, custom_objects=custom_objects)
  File "/home/tutu/navrepvenv/lib/python3.6/site-packages/stable_baselines/common/base_class.py", line 666, in _load_from_file
    data = json_to_data(json_data, custom_objects=custom_objects)
  File "/home/tutu/navrepvenv/lib/python3.6/site-packages/stable_baselines/common/save_util.py", line 120, in json_to_data
    base64.b64decode(serialization.encode())
  File "/home/tutu/navrepvenv/lib/python3.6/site-packages/cloudpickle/cloudpickle_fast.py", line 384, in <module>
    class CloudPickler(Pickler):
  File "/home/tutu/navrepvenv/lib/python3.6/site-packages/cloudpickle/cloudpickle_fast.py", line 406, in CloudPickler
    dispatch[types.CellType] = _cell_reduce
AttributeError: module 'types' has no attribute 'CellType'

Is there a version error in my installation? I hope you can help. Thank you~~
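Note: types.CellType only exists on Python 3.8 and later, while the traceback shows Python 3.6, so the installed cloudpickle release appears to be newer than the interpreter supports. Downgrading cloudpickle to an older release that still targets Python 3.6 (a 1.2.x version, for instance) should resolve it. Alternatively, a minimal shim, assuming the mismatch is only this missing attribute, can backport it before the model is loaded:

import types

# types.CellType is missing on Python < 3.8, but newer cloudpickle releases
# reference it. Build a closure cell and register its type as a stand-in.
if not hasattr(types, "CellType"):
    def _make_cell(value=None):
        def inner():
            return value
        return inner.__closure__[0]

    types.CellType = type(_make_cell())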

SOADRL code

@danieldugas @ethzasl-jenkins How are you?
I read your previous paper, SOADRL.
Your package soadrl-crowdnav (ver. 0.0.3) contains only a fragment of the SOADRL code.
Could I get the full code?

test in ianenv

Hi, thanks for the code. I find your work interesting and I am following your papers (soadrl, navrep and navdreams). There seems to be no checkpoint file for soadrl (where can I download soadrl/Final_models/angular_map_full_FOV?). Could you please share it for testing?

Thank you, I look forward to your reply~

Issues when trying out test environment with navrep

Hi, so I've tried installing navrep and testing out this environment with the pretrained models in your repository, and I have a couple of issues I was wondering about.
First, while updating dependencies, I found that I don't have permission to clone some of the repositories (e.g. deep_social_planner, etc).

Also, I ran into a strange issue, "dynamic module does not define module export function (PyInit__tf2)", when trying out the test environment with navrep. It appears to be caused by the installed ROS version being compatible only with Python 2.7?

Please let me know your thoughts. Thanks
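If the Python 2.7 ROS installation is indeed shadowing the Python 3 packages, one commonly used workaround, sketched here under the assumption of a standard /opt/ros install (the distro path may differ), is to drop the ROS entries from sys.path before importing navrep:

import sys

# Remove ROS's Python 2 dist-packages entries (e.g.
# /opt/ros/melodic/lib/python2.7/dist-packages) so the Python 3 interpreter
# does not try to load the py2-built tf2 extension module.
sys.path = [p for p in sys.path if "/opt/ros/" not in p]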

Bug in the simulation

There is a bug that occurs in almost every randomly generated map: the Lidar sensor detects a wall, but the robot can pass through that wall freely.
In other words, the hit-box seen by the Lidar rays differs from the collision hit-box of the wall, which corrupts the resulting data and results.
The problem exists in the original code of the repository and went unnoticed even though it appears in almost every generated map (probably because everyone runs it headless to train reinforcement learning models, so nobody ever inspects episodes manually).
I discovered this problem because I wrote an algorithm that navigates the environment by planning and re-planning paths with A*; sometimes the algorithm finds a path through a wall, and the robot passes through the wall without colliding (meaning the episode does not reset).
I usually save each episode as a video to inspect the algorithm's behavior (in order to improve it); I kept encountering this issue and eventually discovered that the bug exists in almost all episodes.
Here is a link to some episodes where the bug happened.
The link also includes some useful statistics for anyone who wants to replicate the episodes.
https://drive.google.com/drive/folders/1osTR1u2T_FcAcHtRVsES1Yb1jOXeTIcG?usp=sharing
These are just examples; there are many more.
To debug the problem I rendered the collision hit-box, and this is what I found.
The dark area is the collision hit-box and the lines are the Lidar hit-box.

[image: bugged]

As you can see, almost none of the collision hit-boxes match the Lidar hit-boxes, though some are off more than others (especially the ones close to the edges of the map).
After a lot of debugging I found that when an obstacle is created, it is assigned both a collision hit-box and a Lidar hit-box.
Then, when the obstacles are merged into the map, any part of an obstacle that lies outside the map is cut off, but the vertices of the Lidar hit-box are never updated to match, which causes the bug.
The fix for this bug is the following:
In navrepvenv/lib/python3.6/site-packages/crowd_sim/envs/crowd_sim.py,
in the method generate_static_map_input(),
in the last part of the method, replace the following for loop:

        for obstacle_num, obstacle in enumerate(obstacles):
            if obstacle.location_x > obstacle.dim[0] / 2.0 and \
                obstacle.location_x < grid_size - obstacle.dim[0] / 2.0 and \
                obstacle.location_y > obstacle.dim[1] / 2.0 and \
                obstacle.location_y < grid_size - obstacle.dim[1] / 2.0:
                
                start_idx_x = int(
                    round(
                        obstacle.location_x -
                        obstacle.dim[0] /
                        2.0))
                start_idx_y = int(
                    round(
                        obstacle.location_y -
                        obstacle.dim[1] /
                        2.0))
                self.map[start_idx_x:start_idx_x +
                         obstacle.dim[0], start_idx_y:start_idx_y +
                         obstacle.dim[1]] = np.minimum(self.map[start_idx_x:start_idx_x +
                                                                obstacle.dim[0], start_idx_y:start_idx_y +
                                                                obstacle.dim[1]], obstacle.patch)
            else:
                x_test = []
                y_test = []
                for idx_x in range(obstacle.dim[0]):
                    for idx_y in range(obstacle.dim[1]):
                        shifted_idx_x = idx_x - obstacle.dim[0] / 2.0
                        shifted_idx_y = idx_y - obstacle.dim[1] / 2.0
                        submap_x = int(
                            round(
                                obstacle.location_x +
                                shifted_idx_x))
                        submap_y = int(
                            round(
                                obstacle.location_y +
                                shifted_idx_y))
                        if submap_x > 0 and submap_x < grid_size and submap_y > 0 and submap_y < grid_size:
                            #self.test_obs.append([submap_x, submap_y])
                            x_test.append(submap_x)
                            y_test.append(submap_y)
                            self.map[submap_x,
                                     submap_y] = obstacle.patch[idx_x, idx_y]

with the following updated version (which saves each obstacle's occupied cells and updates the vertices of the Lidar hit-box accordingly):

        self.test_obs = []                                                       
        for obstacle_num, obstacle in enumerate(obstacles):
            if obstacle.location_x > obstacle.dim[0] / 2.0 and \
                obstacle.location_x < grid_size - obstacle.dim[0] / 2.0 and \
                obstacle.location_y > obstacle.dim[1] / 2.0 and \
                obstacle.location_y < grid_size - obstacle.dim[1] / 2.0:
                
                start_idx_x = int(
                    round(
                        obstacle.location_x -
                        obstacle.dim[0] /
                        2.0))
                start_idx_y = int(
                    round(
                        obstacle.location_y -
                        obstacle.dim[1] /
                        2.0))
                self.map[start_idx_x:start_idx_x +
                         obstacle.dim[0], start_idx_y:start_idx_y +
                         obstacle.dim[1]] = np.minimum(self.map[start_idx_x:start_idx_x +
                                                                obstacle.dim[0], start_idx_y:start_idx_y +
                                                                obstacle.dim[1]], obstacle.patch)
                # Here we save the vertices
                self.test_obs.append([range(start_idx_x, start_idx_x + obstacle.dim[0]), range(start_idx_y, start_idx_y + obstacle.dim[1])])

            else:
                x_test = []
                y_test = []
                for idx_x in range(obstacle.dim[0]):
                    for idx_y in range(obstacle.dim[1]):
                        shifted_idx_x = idx_x - obstacle.dim[0] / 2.0
                        shifted_idx_y = idx_y - obstacle.dim[1] / 2.0
                        submap_x = int(
                            round(
                                obstacle.location_x +
                                shifted_idx_x))
                        submap_y = int(
                            round(
                                obstacle.location_y +
                                shifted_idx_y))
                        if submap_x > 0 and submap_x < grid_size and submap_y > 0 and submap_y < grid_size:
                            #self.test_obs.append([submap_x, submap_y])
                            x_test.append(submap_x)
                            y_test.append(submap_y)
                            self.map[submap_x,
                                     submap_y] = obstacle.patch[idx_x, idx_y]
                # Here we save the vertices
                self.test_obs.append([x_test, y_test])
            # Here we update the Lidar hit-box vertices to match the clipped obstacle
            obs = self.test_obs[obstacle_num].copy()
            self.obstacle_vertices[obstacle_num] = [((obs[0][-1] - grid_size/2)/10, (obs[1][0] - grid_size/2)/10),
                                                    ((obs[0][0] - grid_size/2)/10, (obs[1][0] - grid_size/2)/10),
                                                    ((obs[0][0] - grid_size/2)/10, (obs[1][-1] - grid_size/2)/10),
                                                    ((obs[0][-1] - grid_size/2)/10, (obs[1][-1] - grid_size/2)/10)]

And this is the result for the same episode shown above, after fixing the problem. As we can see, the vertices of the Lidar hit-box are updated to match the collision hit-box.

[image: fixed]

(Additional info)
If you want to render the hit-boxes as in the images above, do the following.
After applying the fix, the simulation object soadrl_sim has the attribute self.test_obs.
In navrepvenv/lib/python3.6/site-packages/navrep/envs/navreptrainenv.py,
in the render() method, add this (somewhere in the middle):

def render(...):
    ...
    # hit-box: draw each obstacle's Lidar hit-box as a translucent quad
    for obs in self.soadrl_sim.test_obs:
        gl.glBegin(gl.GL_QUADS)
        gl.glColor4f(0, 0, 0, 0.3)
        gl.glVertex3f((obs[0][-1] - self.grid_size/2)/10, (obs[1][0] - self.grid_size/2)/10, 0)
        gl.glVertex3f((obs[0][0] - self.grid_size/2)/10, (obs[1][0] - self.grid_size/2)/10, 0)
        gl.glVertex3f((obs[0][0] - self.grid_size/2)/10, (obs[1][-1] - self.grid_size/2)/10, 0)
        gl.glVertex3f((obs[0][-1] - self.grid_size/2)/10, (obs[1][-1] - self.grid_size/2)/10, 0)
        gl.glEnd()
    ...

If you want to save a video of the simulation, you need to do the following:

In navrepvenv/lib/python3.6/site-packages/navrep/envs/navreptrainenv.py,
in the reset() method, add this at the end (just before the return):

def reset(...):
    ...
    self.video = []  # collect rendered frames for this episode
    return ...

And in the last part of the render() method, add this:

def render(...):
    ...
    # grab the current color buffer as an RGBA image, flipped vertically
    frame = np.asarray(
        pyglet.image.get_buffer_manager().get_color_buffer().get_image_data().get_data()
    ).reshape((WINDOW_W, WINDOW_H, 4))[::-1]
    # Image.fromarray(frame, 'RGBA').save("finally.png")
    self.video.append(frame)
    return ...

Now the environment attribute self.video contains all the frames, but they still need to be exported.
Here is a function that exports them to mp4 format.
Just put it somewhere and pass it the environment attribute self.video, the environment fps (use fps=1/self._get_dt() so that the video plays in real time), and the path where the video will be saved.

def make_video(frames_array, fps, save_file):
    import cv2
    print(f"saving video to {save_file} ...")
    height = frames_array[0].shape[0]
    width = frames_array[0].shape[1]
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    video = cv2.VideoWriter(save_file, fourcc, float(fps), (width, height))
    for frame in frames_array:
        # frames are RGBA; drop the alpha channel and flip to BGR for OpenCV
        video.write(frame[:, :, 2::-1])
    video.release()
    print("done saving video")

And I hope this was helpful.
Best Regards
