Comments (6)
Hello @pablosreyero,
Could you try structuring your code in the following way and let us know if the problem still occurs:
import gc

def iteration(...):
    y = channel([x, h_freq, no])
    b_hat = pusch_receiver([y, no])
    BER = compute_ber(b, b_hat)
    return BER.numpy()

def main():
    for it_i in range(...):
        BER_np = iteration(...)
        # Use BER_np as needed
        del BER_np
        gc.collect()
The key thing is that all per-iteration variables must go out of scope before calling the garbage collector.
If this works, you can call the garbage collector less often to reduce the overhead (e.g. once every 500 iterations).
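As an illustrative sketch of the less frequent collection (the body of `iteration` here is a placeholder; in the real code it would run the channel, receiver, and BER computation):

```python
import gc

def run(num_iterations):
    # Placeholder for the per-iteration work; stands in for the
    # channel / pusch_receiver / compute_ber pipeline above.
    def iteration(i):
        return float(i)  # stands in for BER.numpy()

    results = []
    for it_i in range(num_iterations):
        ber_np = iteration(it_i)
        results.append(ber_np)
        # Collect only once every 500 iterations to amortize the overhead.
        if (it_i + 1) % 500 == 0:
            gc.collect()
    return results
```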
Hi,
I had a quick look at your code.
First of all, it seems that you want to simulate a single transmitter sending the same stream to 5 receivers.
However, you only configure a single PUSCHReceiver, so something is wrong in your setup. I also have some doubts that this is a typical PUSCH scenario.
Could it be that you actually want to simulate a distributed MIMO receiver? If this is the case, you would simply need to reshape the tensor of the channel frequency response from [batch_size, num_rx, num_rx_ant, ...] to [batch_size, 1, num_rx * num_rx_ant, ...].
This will probably not solve the memory issue. However, I would recommend that you run your simulations in graph mode. This might resolve it, and in any case it should substantially speed up your simulations, even on CPU.
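A sketch of the suggested reshape, using NumPy with made-up dimensions (in the real code, `h_freq` would come from the Sionna channel model and the trailing axes would match its output):

```python
import numpy as np

# Hypothetical dimensions: batch of 8, 5 receivers with 2 antennas each,
# 1 transmitter with 4 antennas, 14 OFDM symbols, 64 subcarriers.
batch_size, num_rx, num_rx_ant = 8, 5, 2
h_freq = np.zeros((batch_size, num_rx, num_rx_ant, 1, 4, 14, 64))

# Merge the receiver and per-receiver antenna axes into one
# distributed-MIMO receiver with num_rx * num_rx_ant antennas.
h_freq_dmimo = h_freq.reshape(batch_size, 1, num_rx * num_rx_ant, 1, 4, 14, 64)
print(h_freq_dmimo.shape)  # (8, 1, 10, 1, 4, 14, 64)
```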
Hello @merlinND,
Thank you very much for your reply. We have already tested both the garbage collector and the del statement with multiple variables in the past, but it did not help. However, I reproduced the exact code structure you provided and obtained the same results. I attach the new code and a log file with the evolution of the RAM usage in the zip file.
Thanks again for your help.
Pablo.-
https://github.com/NVlabs/sionna/files/14603871/files.zip
Hello @jhoydis,
Thanks for pointing out the error in my scenario; you are totally right. I rushed and copied an old version of my code just to reproduce the RAM issue in a smaller example. If I'm not mistaken, you already mentioned this tensor reshape in another discussion (#269), which is how I noticed, so thanks again for the reminder.
Now, coming back to the RAM issue, I have tried everything: garbage collectors, del statements (at the end of every iteration), limiting memory usage, converting the .ipynb to a .py and running the code from the terminal (without Jupyter Notebook), and analyzing variables and objects with a Python profiler, but nothing seems to unveil the problem. This memory consumption occurs when running simulations without a Keras model and on CPU, and even though Sionna is meant to be run in a Keras layer and on GPU, it is really strange to watch the RAM slowly fade away, as if something were accumulating in memory.
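One generic way to check whether Python-side objects (as opposed to TensorFlow-internal buffers) are accumulating is the standard-library `tracemalloc` module; a minimal sketch, with a stand-in workload in place of the simulation loop:

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

# Stand-in workload; in the real code this would be one or more
# simulation iterations.
data = [list(range(1000)) for _ in range(100)]

after = tracemalloc.take_snapshot()
# Allocation sites that grew between the two snapshots, largest first.
top = after.compare_to(before, "lineno")
for stat in top[:3]:
    print(stat)
```

If the top entries keep growing across iterations, the traceback of the offending allocation site usually reveals what is being retained.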
Thanks for your help and for bringing the worlds of AI and Wireless Communications even closer together with Sionna!
Pablo.-
Have you tried running your simulations in graph mode?
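Graph mode here means wrapping the per-iteration computation in `tf.function`, so TensorFlow traces it once into a graph instead of re-executing (and re-allocating) eagerly every iteration. A minimal sketch, with the body standing in for the actual channel/receiver pipeline:

```python
import tensorflow as tf

@tf.function  # compiles the body into a TF graph on the first call
def run_iteration(x):
    # Stand-in computation; in the real code this would wrap the
    # channel, PUSCHReceiver, and BER computation.
    return tf.reduce_mean(tf.square(x))

x = tf.random.normal([1024])
result = run_iteration(x)  # first call traces and compiles the graph
```

Note that calling a `tf.function` with Python objects of changing types or shapes triggers retracing, which itself can grow memory, so inputs should keep a fixed signature across iterations.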
Not yet, but I'm going to do so. I'll let you know if we encounter any other errors.