
atiss's Introduction

ATISS: Autoregressive Transformers for Indoor Scene Synthesis

[Example generated scenes 1-3]

This repository contains the code that accompanies our paper ATISS: Autoregressive Transformers for Indoor Scene Synthesis.

Below you can find detailed usage instructions for training your own models, using our pretrained models, and performing the interactive tasks described in the paper.

If you found this work influential or helpful for your research, please consider citing

@Inproceedings{Paschalidou2021NEURIPS,
  author = {Despoina Paschalidou and Amlan Kar and Maria Shugrina and Karsten Kreis and Andreas Geiger and Sanja Fidler},
  title = {ATISS: Autoregressive Transformers for Indoor Scene Synthesis},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year = {2021}
}

Installation & Dependencies

Our codebase has a number of Python dependencies, all of which are listed in the provided environment.yaml file.

For the visualizations, we use simple-3dviz, our easy-to-use library for visualizing 3D data using Python and ModernGL, together with matplotlib for the colormaps. Note that simple-3dviz provides a lightweight scene viewer based on wxpython; if you wish to use our scripts for visualizing the generated scenes, you will also need to install wxpython. Note that for all the renderings in the paper we used NVIDIA's OMNIVERSE.
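
As a quick sanity check that the visualization stack works, the following minimal sketch (the mesh path is a placeholder) loads a mesh and opens the simple-3dviz viewer:

from simple_3dviz import Mesh
from simple_3dviz.window import show  # requires wxpython

# A minimal sketch, assuming a local .obj file; opens an interactive viewer.
mesh = Mesh.from_file("path/to/raw_model.obj")
show(mesh, size=(512, 512))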

The simplest way to make sure that you have all dependencies in place is to use conda. You can create a conda environment called atiss using

conda env create -f environment.yaml
conda activate atiss

Next compile the extension modules. You can do this via

python setup.py build_ext --inplace
pip install -e .
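
If everything compiled correctly, the scene_synthesis package should now be importable, which you can verify with

python -c "import scene_synthesis"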

Dataset

To evaluate a pretrained model or train a new model from scratch, you need to obtain the 3D-FRONT and the 3D-FUTURE datasets. To download both datasets, please refer to the instructions provided on the dataset webpage. As soon as you have downloaded the 3D-FRONT and the 3D-FUTURE datasets, you are ready to start the preprocessing. In addition to the preprocessing script (preprocess_data.py), we also provide a very useful script for visualizing 3D-FRONT scenes (render_threedfront_scene.py), which you can easily execute by running

python render_threedfront_scene.py SCENE_ID path_to_output_dir path_to_3d_front_dataset_dir path_to_3d_future_dataset_dir path_to_3d_future_model_info path_to_floor_plan_texture_images

You can also visualize the walls, the windows, and textured objects by setting the corresponding arguments. Apart from visualizing the scene with scene id SCENE_ID, the render_threedfront_scene.py script also generates a subfolder in the output folder (specified via the path_to_output_dir argument) that contains the .obj files as well as the textures of all objects in that scene. Note that examples of the expected scene ids SCENE_ID can be found in the train/test/val split files for the various rooms in the config folder, e.g. MasterBedroom-28057, LivingDiningRoom-4125 etc.
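
For example, using one of the scene ids mentioned above and the same placeholder paths as elsewhere in this README, a concrete invocation would look like

python render_threedfront_scene.py MasterBedroom-28057 path_to_output_dir path_to_3d_front_dataset_dir path_to_3d_future_dataset_dir path_to_3d_future_model_info path_to_floor_plan_texture_images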

Data Preprocessing

Once you have downloaded the 3D-FRONT and 3D-FUTURE datasets, you need to run the preprocess_data.py script in order to prepare the data, so that you can train your own models or generate new scenes using previously trained models. To run the preprocessing script, simply run

python preprocess_data.py path_to_output_dir path_to_3d_front_dataset_dir path_to_3d_future_dataset_dir path_to_3d_future_model_info path_to_floor_plan_texture_images --dataset_filtering threed_front_bedroom

Note that you can choose the filtering for the different room types (e.g. bedrooms, living rooms, dining rooms, libraries) via the --dataset_filtering argument. The path_to_floor_plan_texture_images is the path to a folder containing different floor plan textures, which are necessary to render the rooms using a top-down orthographic projection. An example of such a folder can be found in the demo/floor_plan_texture_images folder.

This script starts by parsing all scenes from the 3D-FRONT dataset, and then for each scene it generates a subfolder inside path_to_output_dir that contains the information for all objects in the scene (boxes.npz), the room mask (room_mask.png) and the scene rendered using a top-down orthographic projection (rendered_scene_256.png). Note that for living rooms and dining rooms you also need to change the size of the room during rendering from the default value of 3.1m to 6.2m, via the --room_side argument.
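
If you want to inspect what the preprocessing produced for a particular scene, a small sketch (the scene folder name is illustrative; the exact key set is whatever preprocess_data.py stored) is:

import numpy as np

# List the arrays saved for one preprocessed scene.
data = np.load("path_to_output_dir/MasterBedroom-28057/boxes.npz", allow_pickle=True)
for key in data.files:
    print(key, data[key].shape)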

Moreover, you will notice that the preprocess_data.py script takes a significant amount of time to parse all 3D-FRONT scenes. To reduce the waiting time, we cache the parsed scenes and save them to the /tmp/threed_front.pkl file. Therefore, once you have parsed the 3D-FRONT scenes, you can provide this path via the PATH_TO_SCENES environment variable the next time you run this script, as follows:

PATH_TO_SCENES="/tmp/threed_front.pkl" python preprocess_data.py path_to_output_dir path_to_3d_front_dataset_dir path_to_3d_future_dataset_dir path_to_3d_future_model_info path_to_floor_plan_texture_images --dataset_filtering room_type

Finally, to further reduce the preprocessing time, note that it is possible to run multiple instances of this script in parallel, as it automatically checks whether a scene has already been preprocessed and, if so, moves on to the next scene.
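
For example, assuming a POSIX shell and an already-populated cache, four parallel instances could be launched as follows (a sketch, not an official workflow):

for i in 1 2 3 4; do
    PATH_TO_SCENES="/tmp/threed_front.pkl" python preprocess_data.py path_to_output_dir path_to_3d_front_dataset_dir path_to_3d_future_dataset_dir path_to_3d_future_model_info path_to_floor_plan_texture_images --dataset_filtering threed_front_bedroom &
done
wait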

How to pickle the 3D-FUTURE dataset

Most of our scripts require a path to a file that contains the parsed ThreedFutureDataset after being pickled. To produce it, we provide the pickle_threed_future_dataset.py script, which does this automatically for you. You can simply run it as follows:

python pickle_threed_future_dataset.py path_to_output_dir path_to_3d_front_dataset_dir path_to_3d_future_dataset_dir path_to_3d_future_model_info --dataset_filtering room_type

Note that by specifying the PATH_TO_SCENES environment variable this script will run significantly faster. Moreover, this step is necessary for all room types containing different objects. For the case of 3D-FRONT, this is the bedrooms and the living/dining rooms, thus you have to run this script twice with different --dataset_filtering options, as shown below. Please check the help menu for additional details.
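
Concretely, for 3D-FRONT this amounts to two invocations along the lines of

python pickle_threed_future_dataset.py path_to_output_dir path_to_3d_front_dataset_dir path_to_3d_future_dataset_dir path_to_3d_future_model_info --dataset_filtering threed_front_bedroom
python pickle_threed_future_dataset.py path_to_output_dir path_to_3d_front_dataset_dir path_to_3d_future_dataset_dir path_to_3d_future_model_info --dataset_filtering threed_front_livingroom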

Usage

As soon as you have installed all dependencies and generated the preprocessed data, you can start training new models from scratch, evaluate our pre-trained models, and visualize the generated scenes using one of our pre-trained models. All scripts expect a path to a config file. In the config folder you can find the configuration files for the different room types. Make sure to change the dataset_directory argument to the path where you saved the preprocessed data from before.

Scene Generation

To generate rooms using a previously trained model, we provide the generate_scenes.py script and you can execute it by running

python generate_scenes.py path_to_config_yaml path_to_output_dir path_to_3d_future_pickled_data path_to_floor_plan_texture_images --weight_file path_to_weight_file

where the --weight_file argument specifies the path to a trained model and the path_to_config_yaml argument defines the path to the config file used to train that particular model. By default, this script randomly selects floor plans from the test set and, conditioned on each floor plan, generates different arrangements of objects. Note that if you want to generate a scene conditioned on a specific floor plan, you can select it by providing its scene id via the --scene_id argument. In case you want to run this script headless, you should set the --without_screen argument. Finally, path_to_3d_future_pickled_data specifies the path to the parsed ThreedFutureDataset after being pickled.
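
The pickled dataset can also be inspected directly; a minimal sketch (assuming the object produced by pickle_threed_future_dataset.py supports len, as the scripts' console output suggests):

import pickle

# Load the pickled ThreedFutureDataset and report how many models it holds.
with open("path_to_3d_future_pickled_data", "rb") as f:
    objects_dataset = pickle.load(f)
print("Loaded {} 3D-FUTURE models".format(len(objects_dataset)))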

Scene Completion & Object Placement

To perform scene completion, we provide the scene_completion.py script that can be executed by running

python scene_completion.py path_to_config_yaml path_to_output_dir path_to_3d_future_pickled_data path_to_floor_plan_texture_images --weight_file path_to_weight_file

where the --weight_file argument specifies the path to a trained model and the path_to_config_yaml argument defines the path to the config file used to train that particular model. For this script, make sure that the encoding type in the config file also contains the word eval. By default, this script randomly selects a room from the test set and, conditioned on this partial scene, populates the empty space with objects. However, you can choose a specific room via the --scene_id argument. This script can also be used to perform object placement, namely, starting from a partial scene, adding an object of a specific object category.

In the output directory, the scene_completion.py script generates two folders for each completion, one that contains the mesh files of the initial partial scene and another one that contains the mesh files of the completed scene.

Object Suggestions

We also provide a script that performs object suggestions based on a user-specified region of acceptable positions. Similar to the previous scripts, you can execute it by running

python object_suggestion.py path_to_config_yaml path_to_output_dir path_to_3d_future_pickled_data path_to_floor_plan_texture_images --weight_file path_to_weight_file

where the --weight_file argument specifies the path to a trained model and the path_to_config_yaml argument defines the path to the config file used to train that particular model. Also for this script, please make sure that the encoding type in the config file contains the word eval. By default, this script randomly selects a room from the test set, and the user can either choose to remove some objects or leave the room unchanged. Subsequently, the user specifies the acceptable positions for placing an object using 6 comma-separated numbers that define the bounding box of the valid positions. Similar to the previous scripts, it is possible to select a particular room via the --scene_id argument.
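
For example, an input such as -1.0,0.0,-1.0,1.0,1.0,1.0 (illustrative values) would describe a box from the minimum corner (-1.0, 0.0, -1.0) to the maximum corner (1.0, 1.0, 1.0); the exact ordering convention is defined by the script.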

In the output directory, the object_suggestion.py script generates two folders in each run, one that contains the mesh files of the initial scene and another one that contains the mesh files of the completed scene with the suggested object.

Failure Cases Detection and Correction

We also provide a script that performs failure case correction on a scene that contains a problematic object. You can simply execute it by running

python failure_correction.py path_to_config_yaml path_to_output_dir path_to_3d_future_pickled_data path_to_floor_plan_texture_images --weight_file path_to_weight_file

where the --weight_file argument specifies the path to a trained model and the path_to_config_yaml argument defines the path to the config file used to train that particular model. Also for this script, please make sure that the encoding type in the config file contains the word eval. By default, this script randomly selects a room from the test set, and the user selects an object inside the room to be placed in an unnatural position. Given the scene with the misplaced object, our model identifies the problematic object and repositions it in a more plausible location.

In the output directory, the failure_correction.py script generates two folders in each run, one that contains the mesh files of the initial scene with the problematic object and another that contains the mesh files of the new scene.

Training

Finally, to train a new network from scratch, we provide the train_network.py script. To execute this script, you need to specify the path to the configuration file you wish to use and the path to the output directory, where the trained models and the training statistics will be saved. Namely, to train a new model from scratch, you simply need to run

python train_network.py path_to_config_yaml path_to_output_dir

Note that it is also possible to continue training from a previously trained model by specifying its path via the --weight_file argument.

Note that if you want to use the RAdam optimizer during training, you will also have to download and install the corresponding code from this repository.

We also provide the option to log the experiment's evolution using Weights & Biases. To do so, you simply need to set the --with_wandb_logger argument and, of course, have wandb installed in your conda environment.
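
For example:

python train_network.py path_to_config_yaml path_to_output_dir --with_wandb_logger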

Relevant Research

Please also check out the following papers that explore similar ideas:

  • Fast and Flexible Indoor Scene Synthesis via Deep Convolutional Generative Models pdf
  • Sceneformer: Indoor Scene Generation with Transformers pdf

atiss's People

Contributors

nazcaspider, paschalidoud


atiss's Issues

FileNotFoundError: [Errno 2] No such file or directory:

Every time I run the code below:
(atiss) bobinkim@Bobinui-MacbookAir scripts % python3 pickle_threed_future_dataset.py path_to_output_dir /Users/bobinkim/ATISS/dataset/3D-FRONT /Users/bobinkim/ATISS/dataset/3D-FUTURE-model /Users/bobinkim/ATISS/dataset/3D-FUTURE-model/model_info.json --dataset_filtering threed_front_bedroom

the error below pops up:
Applying threed_front_bedroom filtering
Loading dataset 6811 / 6812
Traceback (most recent call last):
File "/Users/bobinkim/ATISS/scene_synthesis/datasets/threed_front_scene.py", line 339, in corners
bbox_vertices = np.load(self.path_to_bbox_vertices, mmap_mode="r")
File "/Users/bobinkim/miniforge3/envs/atiss/lib/python3.8/site-packages/numpy/lib/npyio.py", line 405, in load
fid = stack.enter_context(open(os_fspath(file), "rb"))
FileNotFoundError: [Errno 2] No such file or directory: '/Users/bobinkim/ATISS/dataset/3D-FUTURE-model/e3560eb3-d4e1-4add-8b51-b3dd5ec6943b/bbox_vertices.npy'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/Users/bobinkim/ATISS/scene_synthesis/datasets/threed_front_scene.py", line 261, in raw_model
return trimesh.load(
File "/Users/bobinkim/miniforge3/envs/atiss/lib/python3.8/site-packages/trimesh/exchange/load.py", line 116, in load
) = parse_file_args(file_obj=file_obj,
File "/Users/bobinkim/miniforge3/envs/atiss/lib/python3.8/site-packages/trimesh/exchange/load.py", line 630, in parse_file_args
raise ValueError('string is not a file: {}'.format(file_obj))
ValueError: string is not a file: /Users/bobinkim/ATISS/dataset/3D-FUTURE-model/e3560eb3-d4e1-4add-8b51-b3dd5ec6943b/raw_model.obj

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "pickle_threed_future_dataset.py", line 127, in
main(sys.argv[1:])
File "pickle_threed_future_dataset.py", line 101, in main
scenes_dataset = ThreedFront.from_dataset_directory(
File "/Users/bobinkim/ATISS/scene_synthesis/datasets/threed_front.py", line 179, in from_dataset_directory
return cls([s for s in map(filter_fn, scenes) if s], bounds)
File "/Users/bobinkim/ATISS/scene_synthesis/datasets/threed_front.py", line 179, in
return cls([s for s in map(filter_fn, scenes) if s], bounds)
File "/Users/bobinkim/ATISS/scene_synthesis/datasets/common.py", line 211, in inner
s = next(fs)(s)
File "/Users/bobinkim/ATISS/scene_synthesis/datasets/common.py", line 108, in inner
return scene if scene.bbox[1][axis] <= max_size else False
File "/Users/bobinkim/ATISS/scene_synthesis/datasets/threed_front_scene.py", line 473, in bbox
corners = np.vstack([corners, f.corners()])
File "/Users/bobinkim/ATISS/scene_synthesis/datasets/threed_front_scene.py", line 341, in corners
bbox_vertices = np.array(self.raw_model().bounding_box.vertices)
File "/Users/bobinkim/ATISS/scene_synthesis/datasets/threed_front_scene.py", line 273, in raw_model
v, f = load_obj(self.raw_model_path)
File "/Users/bobinkim/ATISS/scene_synthesis/datasets/threed_front_scene.py", line 37, in load_obj
fin = open(fn, 'r')
FileNotFoundError: [Errno 2] No such file or directory: '/Users/bobinkim/ATISS/dataset/3D-FUTURE-model/e3560eb3-d4e1-4add-8b51-b3dd5ec6943b/raw_model.obj'

The exact same error occurs when running preprocess_data.py as well.

This might be a simple and easy question. Feel free to share your way of resolving this issue, or your opinion.

The KL divergence

In the script evaluate_kl_divergence_object_category.py, the code for selecting the data is as follows:

parser.add_argument( "--splits", choices=[ "training", "validation" ], default="training", help="Split to evaluate" )

The choice is limited to the training and validation sets, whereas the test set needs to be selectable during the evaluation process.
Considering the calculated values: using the training set yields the values mentioned in the paper, which are < 0.01. In contrast, when using the test data, the KL values are > 0.01; the values differ by roughly a factor of 5 to 10.

Type error triggered by generate_scenes.py

Your idea of using transformers in this project is great, and I'm trying to reproduce it, but I've hit some obstacles.
After training, I run the following command:

python generate_scenes.py ../config/bedrooms_config.yaml /data/3D-generate /data/3D-FUTURE-pickle/threed_future_model_bedroom.pkl ../demo/floor_plan_texture_images --weight_file /data/weights/8A9ENPCMK/model_00100

and get this error:

Running code on cuda:0
Loaded 2354 3D-FUTURE models
Applying no_filtering filtering
Loaded 162 scenes with 21 object types:
Loading weight file from /data/weights/8A9ENPCMK/model_00100
0 / 10: Using the 147 floor plan of scene MasterBedroom-109561
Traceback (most recent call last):
  File "generate_scenes.py", line 266, in <module>
    main(sys.argv[1:])
  File "generate_scenes.py", line 192, in main
    bbox_params = network.generate_boxes(room_mask=room_mask)
  File "/usr/local/anaconda3/envs/atiss/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
    return func(*args, **kwargs)
  File "/codes/ATISS/scene_synthesis/networks/autoregressive_transformer.py", line 227, in generate_boxes
    box = self.autoregressive_decode(boxes, room_mask=room_mask)
  File "/codes/ATISS/scene_synthesis/networks/autoregressive_transformer.py", line 202, in autoregressive_decode
    F = self._encode(boxes, room_mask)
  File "/codes/ATISS/scene_synthesis/networks/autoregressive_transformer.py", line 165, in _encode
    start_symbol_f = self.start_symbol_features(B, room_mask)
  File "/codes/ATISS/scene_synthesis/networks/autoregressive_transformer.py", line 93, in start_symbol_features
    room_layout_f = self.fc_room_f(self.feature_extractor(room_mask))
  File "/usr/local/anaconda3/envs/atiss/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/codes/ATISS/scene_synthesis/networks/feature_extractors.py", line 24, in forward
    return self._feature_extractor(X)
  File "/usr/local/anaconda3/envs/atiss/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/anaconda3/envs/atiss/lib/python3.8/site-packages/torchvision/models/resnet.py", line 220, in forward
    return self._forward_impl(x)
  File "/usr/local/anaconda3/envs/atiss/lib/python3.8/site-packages/torchvision/models/resnet.py", line 203, in _forward_impl
    x = self.conv1(x)
  File "/usr/local/anaconda3/envs/atiss/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/anaconda3/envs/atiss/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 419, in forward
    return self._conv_forward(input, self.weight)
  File "/usr/local/anaconda3/envs/atiss/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 415, in _conv_forward
    return F.conv2d(input, weight, self.bias, self.stride,
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same

Environment: CUDA 10
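
For reference, this RuntimeError usually means that an input tensor is still on the CPU while the model weights live on the GPU ("Running code on cuda:0" above). A minimal sketch of the usual remedy, not the repository's exact code, using the names visible in the traceback:

# Move the room mask to the same device as the network before sampling.
room_mask = room_mask.to(device)
bbox_params = network.generate_boxes(room_mask=room_mask)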

Issue with data preprocessing: MTL files do not contain Ns lines

Hi,

The material files provided with the meshes in the 3D-FUTURE dataset do not contain information about the specular exponent "Ns", leading to a failure of simple_3dviz when reading the textured meshes in preprocess_data.py:

Traceback (most recent call last):
  File "preprocess_data.py", line 271, in <module>
    main(sys.argv[1:])
  File "preprocess_data.py", line 258, in main
    renderables = get_textured_objects_in_scene(
  File "E:\users\PJB1\Code\ATISS\scripts\utils.py", line 144, in get_textured_objects_in_scene
    raw_mesh = TexturedMesh.from_file(model_path)
  File "C:\Users\PJB1\Miniconda3\envs\atiss\lib\site-packages\simple_3dviz\renderables\textured_mesh.py", line 300, in from_file
    mtl = read_material_file(mesh.material_file)
  File "C:\Users\PJB1\Miniconda3\envs\atiss\lib\site-packages\simple_3dviz\io\__init__.py", line 27, in read_material_file
    return {
  File "C:\Users\PJB1\Miniconda3\envs\atiss\lib\site-packages\simple_3dviz\io\material.py", line 25, in __init__
    self.read(filename)
  File "C:\Users\PJB1\Miniconda3\envs\atiss\lib\site-packages\simple_3dviz\io\material.py", line 113, in read
    self._Ns = float([
IndexError: list index out of range

Do you know an easy way to overcome this issue without having to make local changes to the simple_3dviz library?
Thanks!
Paul
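
One possible workaround, sketched below under the assumption that a default specular exponent is acceptable, is to patch the dataset's .mtl files instead of simple_3dviz (the root path is a placeholder; back up the dataset first):

import os

# Append a default "Ns" line to every material file that lacks one, so the
# simple_3dviz material parser no longer indexes into an empty list.
root = "path_to_3d_future_dataset_dir"
for dirpath, _, filenames in os.walk(root):
    for name in filenames:
        if not name.endswith(".mtl"):
            continue
        mtl_path = os.path.join(dirpath, name)
        with open(mtl_path) as f:
            lines = f.read().splitlines()
        if not any(line.strip().startswith("Ns") for line in lines):
            with open(mtl_path, "a") as f:
                f.write("\nNs 1.0\n")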

How to create a custom layout properly?

Hi, I am currently trying to create a new floor layout by modifying the boxes.npz data from accepted scene ids. I had to overwrite the old boxes.npz file due to some scene id issues; I am not sure if this is the right step.

Some examples of the scene id issue: running object_suggestion.py on 'Bedroom-14', 'MasterBedroom-3875', or a custom scene id doesn't work. The code randomly selects another scene instead.

Are there any steps I need to take before using certain scenes? Thank you.

preprocess_data.py ends without generating folders

Hi! I am trying to run the code on my Windows machine.
The preprocess_data.py script does run and generates the following prints.

Applying threed_front_livingroom filtering
Loading dataset 6812 / 6813
Loading dataset with 621 rooms
Saving training statistics for dataset with bounds: {'translations': (array([-5.67291869, 0.0375 , -5.71640158]), array([5.09667922, 3.35774051, 5.40485 ])),
'sizes': (array([0.03999 , 0.02000002, 0.0328435 ]), array([2.38027 , 1.770065, 1.322429])), 'angles': (array([-3.14159265]), array([3.14159265]))} to \processed\livingrooms\dataset_stats.txt
Applying threed_front_livingroom filtering
Loading dataset 6812 / 6813
{'translations': (array([-5.67291869, 0.0375 , -5.71640158]), array([5.09667922, 3.35774051, 5.40485 ])), 'sizes': (array([0.03999 , 0.02000002, 0.02799703]), array([2.38027 , 1.770065 , 1.4137885])), 'angles': (array([-3.14159265]), array([3.14159265]))}
Loading dataset with 813 rooms
0it [00:00, ?it/s]

13it [00:00, 128.06it/s]

...

813it [00:06, 116.71it/s]

The iteration count reaches 813 (the number of living rooms), but the script ends without producing any processed files, or even the output folder.

Anyone who faced similar issues?
It would be great to get some ideas.

Figure 11 contradicts Eqs. 8-11

Hi all,

I found it confusing when looking at Figure 11 on page 15. It uses an MLP (2 layers) whose output dim is 64, but if it's predicting the parameters of the mixture of logistics, the output dim should be what's described in Eqs. 8-11.

Can you help me understand the difference? And is the mixture of logistics actually being used (given that the code is not published yet)?

Thank you in advance!

Importance of room_side argument

In readme.md you mentioned the room_side argument:
"This script starts by parsing all scenes from the 3D-FRONT dataset and then for each scene it generates a subfolder inside the path_to_output_dir that contains the information for all objects in the scene (boxes.npz), the room mask (room_mask.png) and the scene rendered using a top-down orthographic_projection (rendered_scene_256.png). Note that for the case of the living rooms and dining rooms you also need to change the size of the room during rendering to 6.2m from 3.1m, which is the default value, via the --room_side argument."
I inspected your code but am still not sure about the importance of that parameter and the actual scales in your model.
Is it true that a room from the default bedroom dataset, with a 64x64 generated mask of which only a 32x32 square is non-zero, has an actual size of 1.55x1.55 meters?
If I want to test your model on a custom room mask, what scales/resolutions should my room mask have for dining room and bedroom generation, respectively?

Invalid Rooms

Hi,

The script pickle_threed_future_dataset and preprocess_data both require the parameter path_to_invalid_scene_ids, whose default value is ../config/invalid_threed_front_rooms.txt.

However, I found that some rooms not listed in invalid_threed_front_rooms.txt are also problematic. One example is MasterBedroom-58086, which I identified by converting the scene json to obj models and opening them in MeshLab.

As invalid rooms would ruin the training process, how can one easily identify these rooms and record their ids in invalid_threed_front_rooms.txt?

Another question is how to identify the problematic objects and record their jids in black_list.txt.

Thanks!

Custom floor layouts?

Is it possible to use custom floor layouts with the trained model? If yes, how can I do this? I've tried to manipulate the data in the boxes.npz files, but without any luck.

Easy overfitting to the dataset?

Hi,

I've been trying to replicate the results on the new dataset, since the old version is no longer accessible. However, when I train the model on the bedroom scenes, the validation loss seems to drop for only the first several epochs and starts to increase thereafter.

Could you provide some info, such as the range the validation loss reached during your experiments? My run fairly consistently reaches the training loss range reported in #9, but still ends up overfitting the training examples.

Thanks in advance.

Best,
Jingyu

What is meant by SCENE_ID?

I have done the preprocessing and tried the folder name of the output, and also the folder name of one of the 3D-FUTURE folders, but nothing happens when I try to render a scene.

Preprocessing error: self.ctx._write_uniform, _moderngl.Error: invalid uniform size

I met this error when executing preprocess_data.py:

Applying threed_front_bedroom filtering
Loading dataset  6812 /  6813
{'translations': (array([-2.7625005,  0.045    , -2.75275  ]), array([2.77844175, 3.6248396 , 2.81854277])), 'sizes': (array([0.0350001 , 0.02000002, 0.012772  ]), array([2.8682  , 1.770065, 1.698315])), 'angles': (array([-3.14159265]), array([3.14159265]))}
Loading dataset with 4004 rooms
2it [00:00,  2.05it/s]
Traceback (most recent call last):
  File "preprocess_data.py", line 271, in <module>
    main(sys.argv[1:])
  File "preprocess_data.py", line 261, in main
    render(
  File "/mnt/storage2/HJJeong/ATISS/scripts/utils.py", line 178, in render
    scene.add(r)
  File "/home/hj/anaconda3/envs/atiss/lib/python3.8/site-packages/simple_3dviz/scenes.py", line 60, in add
    renderable.init(self._ctx)
  File "/home/hj/anaconda3/envs/atiss/lib/python3.8/site-packages/simple_3dviz/renderables/textured_mesh.py", line 197, in init
    self.material = self._material
  File "/home/hj/anaconda3/envs/atiss/lib/python3.8/site-packages/simple_3dviz/renderables/textured_mesh.py", line 229, in material
    self._prog["ambient"].write(self._material.ambient.tobytes())
  File "/home/hj/anaconda3/envs/atiss/lib/python3.8/site-packages/_moderngl.py", line 95, in write
    self.ctx._write_uniform(
_moderngl.Error: invalid uniform size

I have already tried the following solution, but it did not solve my problem:

# Original code in simple_3dviz/io/material.py:
# self._Ns = float([
#     float(l.strip().split()[1:][0])
#     for l in lines if l.strip().startswith("Ns")
# ][0])
# Replaced with a constant:
self._Ns = 400

Can anyone please help me?

Size mismatch in linear layer when training on Bedrooms

Hello!

First of all, thank you very much for publishing your code. I have preprocessed the data and want to start training, just as shown in the README. When I start training on the bedroom room type, I get the following error:

[Screenshot of the size mismatch error]

This is happening at the "fc_class" linear layer in the "BaseAutoregressiveTransformer" module. The other room types work fine, and I am able to train with them (though there is an overfitting issue).
Do you know why the error above is happening? What can I change in the configuration file (i.e. network config) to fix it?

Thank you in advance for your help.

Best,
Munzer

None type in preprocess_data.py

Hello,

For some models, the category in the json is None, and preprocess_data.py does not handle that well. For example, I get:

{'model_id': '00e4b7dd-08a6-4433-a439-856e4b5de58a', 'super-category': 'Others', 'category': None, 'style': 'Minimalist', 'material': 'Composition', 'theme': 'Gold Foil'}

Should this be handled by the blacklists? Or am I doing something wrong?

The old version of the dataset

The number of rooms filtered out of the new version of the dataset is inconsistent with that in the paper; it is much smaller. Could you release the old version of the dataset?

problem with the /tmp/threed_front.pkl

I've tried to run the preprocess_data.py script and met the error FileNotFoundError: [Errno 2] No such file or directory: '/tmp/threed_front.pkl'.
This is what happens when I run the script:

Loading dataset 6812 / 6813
Traceback (most recent call last):
File "pickle_threed_future_dataset.py", line 128, in
main(sys.argv[1:])
File "pickle_threed_future_dataset.py", line 102, in main
scenes_dataset = ThreedFront.from_dataset_directory(
File "C:\Users\user\Documents\Workspace\ATISS\scene_synthesis\datasets\threed_front.py", line 169, in from_dataset_directory
scenes = parse_threed_front_scenes(
File "C:\Users\user\Documents\Workspace\ATISS\scene_synthesis\datasets\utils.py", line 129, in parse_threed_front_scenes
pickle.dump(scenes, open("tmp\threed_front.pkl", "wb"))
OSError: [Errno 22] Invalid argument: 'tmp\threed_front.pkl'

Please help me.

The new version of the dataset

Hello,
Whenever I try to run the preprocess_data.py script with the new version of the dataset, the results contain overlapping output.
Did you re-generate invalid_threed_front_rooms.txt and black_list.txt on the new dataset version? If yes, can you please upload them, or tell me how I can re-generate them myself?

Thanks.

Translations order

def forward(self, sample_params):
    # Unpack the sample_params
    class_labels = sample_params["class_labels"]
    translations = sample_params["translations"]
    sizes = sample_params["sizes"]
    angles = sample_params["angles"]
    room_layout = sample_params["room_layout"]
    # shape (batch,length,dimension)
    B, _, _ = class_labels.shape
    # Apply the positional embeddings only on bboxes that are not the start
    # token
    class_f = self.fc_class(class_labels)
    # Apply the positional embedding along each dimension of the position
    # property
    pos_f_x = self.pe_pos_x(translations[:, :, 0:1])
    pos_f_y = self.pe_pos_x(translations[:, :, 1:2])
    pos_f_z = self.pe_pos_x(translations[:, :, 2:3])
    pos_f = torch.cat([pos_f_x, pos_f_y, pos_f_z], dim=-1)

Maybe the order here is wrong? In my dataset_stats.txt the bounds are "bounds_translations": [-2.762500499999998, 0.045, -2.7527500000000007, 2.778441746198965, 3.6248395981292725, 2.818542771063899], so I think the z coordinate would be clipped to [0.045, 3.624]. Maybe pos_f_y should actually be pos_f_z?

Dataset filtering: Only bedroom works

The error I got:

Traceback (most recent call last):
  File "./scripts/preprocess_data.py", line 272, in <module>
    main(sys.argv[1:])
  File "./scripts/preprocess_data.py", line 153, in main
    dataset = ThreedFront.from_dataset_directory(
  File "c:\users\mibig\desktop\atiss\scene_synthesis\datasets\threed_front.py", line 189, in from_dataset_directory
    return cls([s for s in map(filter_fn, scenes) if s], bounds)
  File "c:\users\mibig\desktop\atiss\scene_synthesis\datasets\threed_front.py", line 34, in __init__
    super().__init__(scenes)
  File "c:\users\mibig\desktop\atiss\scene_synthesis\datasets\common.py", line 52, in __init__
    assert len(scenes) > 0
AssertionError

I have run the preprocess_data.py script according to the GitHub instructions.

When I run any filtering other than bedroom, printing the scenes out in threed_front.py and common.py returns an empty list [].

For the bedroom filtering, the console is able to print out a list of objects. For example:
(From threed_front.py)
[<scene_synthesis.datasets.threed_front_scene.Room object at 0x000001667FE651C0>, <scene_synthesis.datasets.threed_front_scene.Room object at 0x000001667FE96C10>]
(From common.py)
[<scene_synthesis.datasets.threed_front_scene.Room object at 0x000001667FE651C0>, <scene_synthesis.datasets.threed_front_scene.Room object at 0x000001667FE96C10>]

Does anyone have any idea what causes this, or how to fix this problem?
Thank you.

Do a core-ML version

I'd love to be able to try the algorithm without the rendering and all the madness; I'd love to see a minimal version with only the ML core: Dataset in, torch tensor out.

The special q hat token with dimension 512

Thanks for your wonderful work.
In the paper, the special q-hat token has dimension 64 and is concatenated to predict the other attributes. But in the code, the special q-hat token has dimension 512, which seems wrong.

Error when executing preprocess_data.py

as follows:

Applying threed_front_bedroom filtering
Loading dataset 6812 / 6813
Loading dataset with 3879 rooms
Saving training statistics for dataset with bounds: {'translations': (array([-2.7625005, 0.045 , -2.75275 ]), array([2.77844175, 3.6248396 , 2.81854277])), 'sizes': (array([0.03998288, 0.02000002, 0.012772 ]), array([2.8682 , 1.770065, 1.698315])), 'angles': (array([-3.14159265]), array([3.14159265]))} to path_to_output_dir/dataset_stats.txt
Applying threed_front_bedroom filtering
Loading dataset 6812 / 6813
{'translations': (array([-2.7625005, 0.045 , -2.75275 ]), array([2.77844175, 3.6248396 , 2.81854277])), 'sizes': (array([0.0350001 , 0.02000002, 0.012772 ]), array([2.8682 , 1.770065, 1.698315])), 'angles': (array([-3.14159265]), array([3.14159265]))}
Loading dataset with 4041 rooms
1it [00:00, 5.29it/s]
Traceback (most recent call last):
File "preprocess_data.py", line 271, in
main(sys.argv[1:])
File "preprocess_data.py", line 258, in main
renderables = get_textured_objects_in_scene(
File "/userhome3/wangzheng/ATISS-from-paper/ATISS-master/scripts/utils.py", line 138, in get_textured_objects_in_scene
raw_mesh = TexturedMesh.from_file(model_path)
File "/userhome3/wangzheng/anaconda3/envs/atiss-from-paper/lib/python3.8/site-packages/simple_3dviz/renderables/textured_mesh.py", line 300, in from_file
mtl = read_material_file(mesh.material_file)
File "/userhome3/wangzheng/anaconda3/envs/atiss-from-paper/lib/python3.8/site-packages/simple_3dviz/io/init.py", line 27, in read_material_file
return {
File "/userhome3/wangzheng/anaconda3/envs/atiss-from-paper/lib/python3.8/site-packages/simple_3dviz/io/material.py", line 25, in init
self.read(filename)
File "/userhome3/wangzheng/anaconda3/envs/atiss-from-paper/lib/python3.8/site-packages/simple_3dviz/io/material.py", line 113, in read
self._Ns = float([
IndexError: list index out of range

I personally think this error happens when rendering objects in the room, but I have no idea how to fix it.
I can get some results, as shown in the pasted screenshot.
[Screenshot: Snipaste_2023-12-04_20-08-22]

Segmentation Fault

When I run the command
python train_network.py path_to_config_yaml path_to_output_dir,
the program terminates and displays the error message "Segmentation Fault". This usually indicates that the program attempted to access a memory area that it does not have permission to access, which could be caused by a bug in the code.

floor_plan_texture_images issue

Hello, I'm wondering whether this folder already exists or whether I have to create it and gather some textures, because I can't find it anywhere and it's causing errors. Help me, please.

problem with the /tmp/threed_front.pkl file

I've managed to set up everything and tried to run the preprocess_data.py script. As the dataset was about to finish loading, I was met with the error FileNotFoundError: [Errno 2] No such file or directory: '/tmp/threed_front.pkl'.
This is how I ran the preprocess_data.py script:

python .\scripts\preprocess_data.py data\output\1 data\dataset\3D-FRONT data\dataset\3D-FUTURE-model data\dataset\3D-FUTURE-model\model_info.json \demo\floor_plan_texture_images --dataset_filtering threed_front_bedroom
Applying threed_front_bedroom filtering
Loading dataset  6812 /  6813
Traceback (most recent call last):
  File ".\scripts\preprocess_data.py", line 272, in <module>
    main(sys.argv[1:])
  File ".\scripts\preprocess_data.py", line 152, in main
    dataset = ThreedFront.from_dataset_directory(
  File "c:\users\mibig\desktop\atiss\scene_synthesis\datasets\threed_front.py", line 169, in from_dataset_directory
    scenes = parse_threed_front_scenes(
  File "c:\users\mibig\desktop\atiss\scene_synthesis\datasets\utils.py", line 129, in parse_threed_front_scenes
    pickle.dump(scenes, open("/tmp/threed_front.pkl", "wb"))

Is the threed_front.pkl file supposed to be generated automatically, and did I miss any steps?

The same error shows up when I try to pickle the 3D-FUTURE dataset.

python .\scripts\pickle_threed_future_dataset.py data\output\pickledData data\dataset\3D-FRONT data\dataset\3D-FUTURE-model data\dataset\3D-FUTURE-model\model_info.json --dataset_filtering threed_front_bedroom
Applying threed_front_bedroom filtering
Loading dataset  6812 /  6813
Traceback (most recent call last):
  File ".\scripts\pickle_threed_future_dataset.py", line 127, in <module>
    main(sys.argv[1:])
  File ".\scripts\pickle_threed_future_dataset.py", line 101, in main
    scenes_dataset = ThreedFront.from_dataset_directory(
  File "c:\users\mibig\desktop\atiss\scene_synthesis\datasets\threed_front.py", line 169, in from_dataset_directory
    scenes = parse_threed_front_scenes(
  File "c:\users\mibig\desktop\atiss\scene_synthesis\datasets\utils.py", line 129, in parse_threed_front_scenes
    pickle.dump(scenes, open("/tmp/threed_front.pkl", "wb"))
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/threed_front.pkl'

Sorry if this is a stupid mistake; I am a beginner with Python/conda.
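
For reference, the hard-coded /tmp/threed_front.pkl cache path does not exist on Windows, which is why both scripts fail at the pickle.dump call. A portable alternative, sketched under the assumption that the cache location in scene_synthesis/datasets/utils.py can be made configurable, would be:

import os
import tempfile

# Honor PATH_TO_SCENES if set; otherwise fall back to the system temporary
# directory, which exists on both Windows and Unix.
path_to_scenes = os.environ.get(
    "PATH_TO_SCENES",
    os.path.join(tempfile.gettempdir(), "threed_front.pkl")
)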

This is most likely caused by a broken X extension library

I use this command:

python generate_scenes.py /home/liufuqiang/Github/ATISS/config/bedrooms_config.yaml /home/liufuqiang/Github/ATISS/GenOut/ /home/liufuqiang/Github/ATISS/output/threed_future_model_bedroom.pkl /home/liufuqiang/Github/ATISS/demo/floor_plan_texture_images/ --weight_file /mnt/data/liufuqiang/OutPut/6OGD0E7ZK/model_09950

and this is the error:
[xcb] Extra reply data still left in queue
[xcb] This is most likely caused by a broken X extension library
[xcb] Aborting, sorry about that.
python: ../../src/xcb_io.c:577: _XReply: Assertion `!xcb_xlib_extra_reply_data_left' failed.

Question for generate_scenes.py

[Screenshots of the reported error]
When I run generate_scenes.py, I always get the error shown in the first picture: it says my OpenGL version is 300, but my OpenGL version is 4.6.0.
Could you please tell me how to fix this?
Thanks a lot!

Hardware/training time/epochs etc

Hello,

I was wondering what the expected training time is; the paper only mentions that you choose the best model from a very large number of iterations. What hardware did you use to train the model, and how long does that usually take, just to get a ballpark figure?

In the same vein, did you (or your coauthors) try a different LR schedule, optimizer, etc. that seems to work better?

PS: Do you have any intention of releasing the Kaolin scripts used to render the figures in the paper? They look very, very nice!

bedrooms_config.yaml error

data:
    dataset_type: "cached_threedfront"
    encoding_type: "cached_autoregressive_wocm"
    dataset_directory: "/media/paschalidoud/goproorgohome/3D_FRONT_processed/bedrooms"
    annotation_file: "../config/bedroom_threed_front_splits.csv"
    augmentations: ["rotations"]
    filter_fn: "threed_front_bedroom"
    train_stats: "dataset_stats.txt"
    filter_fn: "no_filtering"
    room_layout_size: "64,64"

Pretrained models

Hey!

Currently waiting to get access to the dataset, I have set up the code but was wondering where the pretrained models are located. I can't seem to find them anywhere.

Data size mismatch when executing preprocess_data.py

Hello,
I am getting this issue when executing preprocess_data.py. The problem appears when I point the texture directory path at the 3D-FRONT-texture folder, which has this structure:

|- 3D-FRONT-texture
| |- 00cc8b1d-b284-4108-a48f-a18c320a9d3a.png
| |- 0b3653b4-8e36-4b16-a01b-7505251c66ae.png
| |- ....

and so on

this structure gives this error:

(senior) C:\Users\kenan\Desktop\master\scripts>python preprocess_data.py path_to_output_dir D:\Senior\3D-FRONT D:\Senior\3D-FUTURE-model D:\Senior\3D-FUTURE-model\model_info.json D:\test\3D-FRONT-texture --dataset_filtering threed_front_bedroom
Applying threed_front_bedroom filtering
Loading dataset 6812 / 6813
Loading dataset with 3879 rooms
Saving training statistics for dataset with bounds: {'translations': (array([-2.7625005, 0.045 , -2.75275 ]), array([2.77844175, 3.6248396 , 2.81854277])), 'sizes': (array([0.03998288, 0.02000002, 0.012772 ]), array([2.8682 , 1.770065, 1.698315])), 'angles': (array([-3.14159265]), array([3.14159265]))} to path_to_output_dir\dataset_stats.txt
Applying threed_front_bedroom filtering
Loading dataset 6812 / 6813
{'translations': (array([-2.7625005, 0.045 , -2.75275 ]), array([2.77844175, 3.6248396 , 2.81854277])), 'sizes': (array([0.0350001 , 0.02000002, 0.012772 ]), array([2.8682 , 1.770065, 1.698315])), 'angles': (array([-3.14159265]), array([3.14159265]))}
Loading dataset with 4041 rooms
2487it [2:13:16, 3.22s/it]
Traceback (most recent call last):
File "preprocess_data.py", line 271, in
main(sys.argv[1:])
File "preprocess_data.py", line 261, in main
render(
File "C:\Users\kenan\Desktop\master\scripts\utils.py", line 183, in render
scene.add(r)
File "C:\Users\kenan\anaconda3\envs\senior\lib\site-packages\simple_3dviz\scenes.py", line 60, in add
renderable.init(self._ctx)
File "C:\Users\kenan\anaconda3\envs\senior\lib\site-packages\simple_3dviz\renderables\textured_mesh.py", line 160, in init
self.material = self._material
File "C:\Users\kenan\anaconda3\envs\senior\lib\site-packages\simple_3dviz\renderables\textured_mesh.py", line 203, in material
self._texture = self.prog.ctx.texture(
File "C:\Users\kenan\anaconda3\envs\senior\lib\site-packages\moderngl_init
.py", line 1771, in texture
res.mglo, res._glo = self.mglo.texture(size, components, data, samples, alignment, dtype, internal_format or 0)
_moderngl.Error: data size mismatch 4194304 != 1048576


Also, before that I had the original structure, with files inside many subfolders, as follows:

|- 3D-FRONT-texture
| |- 00cc8b1d-b284-4108-a48f-a18c320a9d3a
| | |- texture.png
| |- 0b3653b4-8e36-4b16-a01b-7505251c66ae
| | |- texture.png
| |- ....

and so on

This structure, when executing, gives the error: PermissionError: [Errno 13] Permission denied.

However, when executing this command with the demo folder at ../demo/floor_plan_texture_images, the code runs correctly. But I think I should run the code on the full set of textures from the dataset, not the example ones; am I right?

_moderngl.Error: data size mismatch on 3D-FRONT data

I've tried to run the preprocess_data.py script and met the error _moderngl.Error: data size mismatch 4194304 != 1048576 on the 3aa40ca2-8d84-4fc1-b2b8-ce2234c45f60 3D-FRONT json.

Traceback (most recent call last):
  File "preprocess_data.py", line 278, in <module>
    main(sys.argv[1:])
  File "preprocess_data.py", line 268, in main
    render(
  File "C:\Users\user\Documents\Workspace\ATISS\scripts\utils.py", line 199, in render
    scene.add(r)
  File "C:\Users\user\miniconda3\envs\atiss\lib\site-packages\simple_3dviz\scenes.py", line 60, in add
    renderable.init(self._ctx)
  File "C:\Users\user\miniconda3\envs\atiss\lib\site-packages\simple_3dviz\renderables\textured_mesh.py", line 160, in init
    self.material = self._material
  File "C:\Users\user\miniconda3\envs\atiss\lib\site-packages\simple_3dviz\renderables\textured_mesh.py", line 203, in material
    self._texture = self._prog.ctx.texture(
  File "C:\Users\user\miniconda3\envs\atiss\lib\site-packages\moderngl\__init__.py", line 1771, in texture
    res.mglo, res._glo = self.mglo.texture(size, components, data, samples, alignment, dtype, internal_format or 0)
_moderngl.Error: data size mismatch 4194304 != 1048576

I really don't know what the problem is. Has anyone else had this problem? Please help me.

Positional Embedding in autoregressive_transformer.py

There are three positional embeddings pe_pos (x, y, z) in the BaseAutoregressiveTransformer class, but in the AutoregressiveTransformer forward function only pe_pos_x and pe_size_x are used. Is that right?

pos_f_x = self.pe_pos_x(translations[:, :, 0:1])
pos_f_y = self.pe_pos_x(translations[:, :, 1:2])
pos_f_z = self.pe_pos_x(translations[:, :, 2:3])
pos_f = torch.cat([pos_f_x, pos_f_y, pos_f_z], dim=-1)

size_f_x = self.pe_size_x(sizes[:, :, 0:1])
size_f_y = self.pe_size_x(sizes[:, :, 1:2])
size_f_z = self.pe_size_x(sizes[:, :, 2:3])
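
For reference: if the positional encoding is a fixed, input-independent sinusoidal mapping, then the per-axis modules compute the same function and reusing pe_pos_x is functionally equivalent. If the embeddings were learned, each axis would presumably need its own module, e.g. (a sketch assuming pe_pos_y and pe_pos_z exist on the class):

# Hypothetical per-axis variant of the snippet above.
pos_f_x = self.pe_pos_x(translations[:, :, 0:1])
pos_f_y = self.pe_pos_y(translations[:, :, 1:2])
pos_f_z = self.pe_pos_z(translations[:, :, 2:3])
pos_f = torch.cat([pos_f_x, pos_f_y, pos_f_z], dim=-1)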

Generating new scene from model

I trained an initial model for 50 epochs and wanted to test it out using the generate_scenes.py script. I'm getting the following error when running the code, please help!

Command:
python3 generate_scenes.py ../config/bedrooms_config.yaml tester /tmp/threed_front.pkl demo/floor_plan_texture_images/ --weight_file models/OSL638YTV/model_00050

Output:
Running code on cpu
Applying no_filtering filtering
Loaded 17129 3D-FUTURE models
<class 'list'>
Applying no_filtering filtering
Loaded 162 scenes with 21 object types:
Loading weight file from models/OSL638YTV/model_00050
0 / 10: Using the 80 floor plan of scene SecondBedroom-36408
Traceback (most recent call last):
File "generate_scenes.py", line 276, in
main(sys.argv[1:])
File "generate_scenes.py", line 211, in main
renderables, trimesh_meshes = get_textured_objects(
File "/home/mil/Desktop/esoft/ATISS/scene_synthesis/utils.py", line 24, in get_textured_objects
furniture = objects_dataset.get_closest_furniture_to_box(
AttributeError: 'list' object has no attribute 'get_closest_furniture_to_box'

GLSL Compiler failed

Does anyone face the same problem as I did?

I'm running preprocess_data.py exactly as shown below:
PATH_TO_SCENES="/tmp/threed_front.pkl" python3 preprocess_data.py path_to_output_dir /Users/bobinkim/ATISS/dataset/3D-FRONT /Users/bobinkim/ATISS/dataset/3D-FUTURE-model /Users/bobinkim/ATISS/dataset/3D-FUTURE-model/model_info.json /Users/bobinkim/ATISS/demo/floor_plan_texture_images --dataset_filtering threed_front_bedroom

A few seconds later, a folder is created under path_to_output_dir. The folder has boxes.npz and room_mask.png. However, an error immediately pops up in the terminal. It says:
(base) bobinkim@Bobinui-MacbookAir scripts % PATH_TO_SCENES="/tmp/threed_front.pkl" python3 preprocess_data.py path_to_output_dir /Users/bobinkim/ATISS/dataset/3D-FRONT /Users/bobinkim/ATISS/dataset/3D-FUTURE-model /Users/bobinkim/ATISS/dataset/3D-FUTURE-model/model_info.json /Users/bobinkim/ATISS/demo/floor_plan_texture_images --dataset_filtering threed_front_bedroom
No GUI library found. Simple-3dviz will be running headless only.
Applying threed_front_bedroom filtering
Loading dataset with 1630 rooms
Saving training statistics for dataset with bounds: {'translations': (array([-2.69074161, 0.0844895 , -2.75275 ]), array([2.75929532, 3.6248396 , 2.67008333])), 'sizes': (array([0.043739 , 0.02000002, 0.01548735]), array([2.8682 , 1.4 , 1.698315])), 'angles': (array([-3.14159265]), array([3.14159265]))} to path_to_output_dir/dataset_stats.txt
Applying threed_front_bedroom filtering
{'translations': (array([-2.69074161, 0.0844895 , -2.75275 ]), array([2.75929532, 3.6248396 , 2.67008333])), 'sizes': (array([0.043739 , 0.02000002, 0.01548735]), array([2.8682 , 1.4 , 1.698315])), 'angles': (array([-3.14159265]), array([3.14159265]))}
Loading dataset with 1694 rooms
11it [00:02, 3.94it/s]
Traceback (most recent call last):
File "/Users/bobinkim/ATISS/scripts/preprocess_data.py", line 271, in
main(sys.argv[1:])
File "/Users/bobinkim/ATISS/scripts/preprocess_data.py", line 261, in main
render(
File "/Users/bobinkim/ATISS/scripts/utils.py", line 178, in render
scene.add(r)
File "/Users/bobinkim/anaconda3/lib/python3.11/site-packages/simple_3dviz/scenes.py", line 60, in add
renderable.init(self._ctx)
File "/Users/bobinkim/anaconda3/lib/python3.11/site-packages/simple_3dviz/renderables/textured_mesh.py", line 46, in init
self._prog = ctx.program(
^^^^^^^^^^^^
File "/Users/bobinkim/anaconda3/lib/python3.11/site-packages/moderngl/init.py", line 1939, in program
res.mglo, res._members, res._subroutines, res._geom, res._glo = self.mglo.program(
^^^^^^^^^^^^^^^^^^
_moderngl.Error: GLSL Compiler failed

fragment_shader

ERROR: 0:38: Invalid call of undeclared identifier 'texture2D'
ERROR: 0:39: Use of undeclared identifier 'texColor'
ERROR: 0:40: Use of undeclared identifier 'texColor'
ERROR: 0:45: Invalid call of undeclared identifier 'texture2D'
ERROR: 0:46: Use of undeclared identifier 'bump_normal'

I think it is related to macOS (P.S. I'm using a Mac M1), but I don't know exactly what causes the issue or how to resolve it.

If anyone has run into the same problem or has any opinion on this, please leave a comment. It would help me a lot.
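
For reference: macOS only exposes OpenGL core-profile contexts, in which the legacy GLSL built-in texture2D has been removed in favor of texture; that matches the compiler errors above. A shader or library version that uses texture instead of texture2D, or rendering through a different backend, may avoid this (a hedged suggestion, not a verified fix).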
