Comments (29)
I'm trying to get DirectML running.
https://github.com/martinet101/win32mica/blob/6ec96560c75e11d97b38f86c01a2f6068836d010/src/win32mica/__init__.py
if sys.platform == "win32" and sys.getwindowsversion().build >= 22000:
    ...
else:
    print(f"Win32Mica Error: {sys.platform} version {sys.getwindowsversion().build} is not supported")
    return 0x32
So MICA is only supported on Windows 11.
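Given that check, a caller can guard the Mica call instead of failing on Windows 10; a minimal sketch (the `ApplyMica` import and signature are assumptions based on the linked source, not a verified API):

```python
import sys

def try_apply_mica(hwnd):
    """Apply the Mica effect only where supported (Windows 11, build 22000+);
    silently skip elsewhere, so the app still starts on Windows 10."""
    if sys.platform == "win32" and sys.getwindowsversion().build >= 22000:
        # ApplyMica is the entry point suggested by the linked source;
        # treat the exact signature as an assumption.
        from win32mica import ApplyMica
        return ApplyMica(hwnd)
    return None  # no-op on Windows 10 and non-Windows platforms
```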
from qualityscaler.
According to CUDA 11 minor version compatibility:
https://docs.nvidia.com/deploy/cuda-compatibility/index.html#minor-version-compatibility
You should strive to minimize the number of CUDA 11 versions you bundle (in Torch and alongside it), and only upgrade if new 11.x features really add value.
If I have an 11.4 driver, it should work with binaries built for 11.1 and 11.3 as well. (Haven't tried it yet.)
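That compatibility claim can be modeled as a simple check; a sketch that only covers the scenario described here (same major version, driver minor >= binary minor), not the full forward-compatibility matrix in the NVIDIA doc:

```python
def minor_compatible(driver_cuda: str, binary_cuda: str) -> bool:
    """Simplified check for the claim above: within the same major version,
    a newer driver (e.g. 11.4) should run binaries built against an older
    minor toolkit (11.1, 11.3)."""
    d_major, d_minor = (int(p) for p in driver_cuda.split("."))
    b_major, b_minor = (int(p) for p in binary_cuda.split("."))
    return d_major == b_major and d_minor >= b_minor

print(minor_compatible("11.4", "11.1"))  # True
print(minor_compatible("11.4", "11.3"))  # True
print(minor_compatible("11.4", "12.0"))  # False: different major version
```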
from qualityscaler.
The max I can go is 3x. Those results are more readable.
from qualityscaler.
I thought I would do the same exercise for pytorch-directml compiled with CUDA and looked for a wheel for it.
However, I can only find very tiresome compile-it-yourself procedures with a very specific old version of Visual Studio (downloaded via MSDN?) and a lot of manual installs such as cuDNN, which even needs an NVIDIA account.
The official pytorch-directml page has some shady build states:
https://pypi.org/project/pytorch-directml/
The only supported platforms seem to be Jetson and WSL.
from qualityscaler.
To install torch with CUDA, look at this page: https://pytorch.org/get-started/locally/
You need to select OS, CUDA version, and Python package manager.
from qualityscaler.
You have to install Python 3.8.10.
from qualityscaler.
After several trials of installing Python 3.8.10 with Tcl/Tk and running pip install -r requirements.txt, I now have:
from qualityscaler.
I commented out Mica; the upscale error remains. No clear hints in the log.
from qualityscaler.
Probably the problem is that version 2.2 uses pytorch-directml, which uses DirectX 12, not CUDA.
Try versions < 2.0, or just modify the code of 2.2 where device = "dml" -> device = "cuda"
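The suggested edit is just a string swap; an illustration of the global replace (the example line is hypothetical, not the actual 2.2 source):

```python
# Everywhere the 2.2 script selects the DirectML backend via a "dml" device
# string, point it at CUDA instead. Illustrative line, not the real source:
line_from_2_2 = 'image_tensor = image_tensor.to("dml")'
patched = line_from_2_2.replace('"dml"', '"cuda"')
print(patched)  # image_tensor = image_tensor.to("cuda")
```

This only works if the installed torch is a CUDA build; with pytorch-directml still installed, "cuda" will not be a valid device.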
from qualityscaler.
Ah sorry, forgot to say: the AI models are not bundled on GitHub because they are too heavy. You can download them here https://drive.google.com/drive/folders/13kfr3qny7S2xwG9h7v95F5mkWs0OmU0D -> BSRGAN.pth and RealSR_JPEG.pth
from qualityscaler.
I tried version 1.5.0 against driver 11.4 and it seems to work; however, it keeps working after telling me it's finished.
from qualityscaler.
Probably the problem is that version 2.2 uses pytorch-directml, which uses DirectX 12, not CUDA.
Try versions < 2.0, or just modify the code of 2.2 where device = "dml" -> device = "cuda"
I think you mean reverting this commit:
66b6f13
from qualityscaler.
Don't revert the entire commit; just modify 2.2 where there is the "dml" string, replacing it with "cuda". That way you get all the new features that came with 2.2, but with the CUDA backend.
from qualityscaler.
If you are using the .py script, you can choose which backend you want just by installing the library and modifying the code.
Yes, in the .zip there is only pytorch-directml, no CUDA libraries.
from qualityscaler.
I globally replaced "dml" with "cuda", but during the run there is no detailed error pointing to the line that suggests a missing library. I also commented out the Mica lines and put the trained models in the directory.
I also tried changing import torch to import torch.cuda, but no detailed errors...
from qualityscaler.
Strange, maybe something is broken in the installed pip packages. You can try cleaning all installed packages and installing everything again:
- pip freeze > uninstall.txt
- pip uninstall -y -r uninstall.txt
- pip install -r requirements.txt --upgrade
Pay attention that you can only install one PyTorch version (either pytorch or pytorch-directml).
Use this requirements.txt, then install PyTorch:
pip3 install torch --extra-index-url https://download.pytorch.org/whl/cu113
requirements.txt
from qualityscaler.
No luck.
I previously removed pytorch-directml and mica from the original requirements.txt.
This is the result of your steps:
(qs) C:\Users\rmast\QualityScaler>pip list
Package Version
------------------------- ------------
altgraph 0.17.2
certifi 2022.6.15
charset-normalizer 2.1.0
colorama 0.4.5
decorator 4.4.2
engineering-notation 0.6.0
future 0.18.2
idna 3.3
imageio 2.19.3
imageio-ffmpeg 0.4.7
moviepy 1.0.3
numpy 1.23.0
opencv-python-headless 4.5.5.64
pefile 2022.5.30
Pillow 9.2.0
pip 21.2.2
proglog 0.1.10
pyinstaller 5.1
pyinstaller-hooks-contrib 2022.7
pypiwin32 223
python-tkdnd 0.2.1
pywin32 304
pywin32-ctypes 0.2.0
requests 2.28.1
setuptools 61.2.0
sv-ttk 0.1
tk-tools 0.16.0
torch 1.12.0+cu113
tqdm 4.64.0
ttkwidgets 0.12.1
typing_extensions 4.3.0
urllib3 1.26.9
wheel 0.37.1
wincertstore 0.2
WMI 1.5.1
Only seconds after pushing the button, no matter the factor (1x, 2x, or whatever), I get an error, and there is no activity in the directory or in nvidia-smi.
from qualityscaler.
You can also try PyTorch LTS with CUDA 10.2:
pip3 install torch==1.8.2 torchvision==0.9.2 torchaudio===0.8.2 --extra-index-url https://download.pytorch.org/whl/lts/1.8/cu102
from qualityscaler.
You can use VS Code, it's much better
from qualityscaler.
I'll try VS Code another time; it must be good as well. I got a cuDNN error from upscale_image_and_save that propagated to an exception which wasn't caught before and wasn't written at line 1070. The right way to get that exact exception is:
except Exception as e:
    write_in_log_file("<p>Error: %s</p>" % str(e))
I see the log file gets cleared by every line written to it. You would probably want a second log file recording what's going on if you really need a log with a single predefined line in it. You could also write the details to the popup.
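One way to keep the single-line status log while still capturing details is a second, append-mode log file; a sketch (the function and file names are assumptions, not the app's actual API):

```python
def write_in_log_file(text, log_path="log.txt"):
    """Current behaviour: mode 'w' truncates, so every write replaces
    the whole file (hence the single-line log)."""
    with open(log_path, "w") as f:
        f.write(text)

def append_to_debug_log(text, log_path="debug_log.txt"):
    """Second log in append mode: keeps the full history of what happened."""
    with open(log_path, "a") as f:
        f.write(text + "\n")

try:
    raise RuntimeError("CUDNN error (example)")  # stand-in for the real failure
except Exception as e:
    write_in_log_file("Upscale error :(")        # short status for the GUI
    append_to_debug_log("Error: %s" % str(e))    # full detail survives here
```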
I found the cuDNN files in the torch directory.
I replaced my torch directory with the torch directory contained in the 1.5.0 version, and now it works with the CUDA 11.4 driver! I had to globally replace both 'dml' and "dml" with 'cuda' and "cuda".
If I want a 4x upscale, I take a segment from a previous upscale and put that into the process; only then does the 4x upscale work. So the needed memory size estimation needs adjustment as well.
from qualityscaler.
Yes, the only way to communicate between the upscale process (the one that knows what is happening) and the GUI process (the one that can modify the GUI and write the little yellow message) is a log file.
Wow! Great!
If I remember well, the VRAM calculation depends on torch.cuda.getMemory(), something like that.
On my personal GPU, in some tests, I saw that a max 600px image with the AI model consumes 6 GB of VRAM, so I assumed:
300px -> 3 GB
500px -> 5 GB
1000px -> 10 GB
and so on, dynamically calculated based on image size and memory. If the image is bigger than the memory limit, the app cuts it into 4 or 9 or 16 parts, etc.
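The heuristic described above (roughly 1 GB of VRAM per 100 px of image side, splitting into 4/9/16 tiles when over budget) can be sketched as:

```python
import math

def plan_tiles(image_side_px, vram_gb):
    """Heuristic from the comment above: a 600 px image needs ~6 GB,
    i.e. roughly vram_gb * 100 px of side length fits in memory.
    If the image is larger, split it into the smallest square grid
    (4, 9, 16, ... tiles) whose tiles fit."""
    max_side = vram_gb * 100
    if image_side_px <= max_side:
        return 1
    per_axis = math.ceil(image_side_px / max_side)
    return per_axis * per_axis

print(plan_tiles(600, 6))    # fits in memory: 1 tile
print(plan_tiles(1000, 6))   # 2x2 grid: 4 tiles
print(plan_tiles(2000, 6))   # 4x4 grid: 16 tiles
```

The earlier 4x report suggests the constant (100 px per GB) also needs to scale with the upscale factor, since the output tensor grows quadratically with it.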
from qualityscaler.
Yes, the only way to communicate between the upscale process (the one that knows what is happening) and the GUI process (the one that can modify the GUI and write the little yellow message) is a log file.
I've seen things like in-memory streams used for communication between threads.
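For threads within one process, the standard-library queue is the usual in-memory channel; a sketch of what replacing the log file could look like (the status strings and polling loop are illustrative, not the app's code):

```python
import queue
import threading

# The upscale worker pushes status strings; the GUI side drains them
# (in Tkinter the drain would typically run in a root.after() callback).
status_queue = queue.Queue()

def upscale_worker():
    status_queue.put("Upscaling tile 1/4 ...")
    status_queue.put("Upscaling tile 2/4 ...")
    status_queue.put("Completed :)")

worker = threading.Thread(target=upscale_worker)
worker.start()
worker.join()

messages = []
while not status_queue.empty():
    messages.append(status_queue.get())  # GUI would show each in the yellow label
print(messages)
```

Between separate processes, multiprocessing.Queue plays the same role.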
from qualityscaler.
For sure there are better methods, but I try to devote as little time as possible to my side projects, because my main job already steals a lot of the day and devoting more than an hour would drive me crazy, haha.
from qualityscaler.
Yeah, a little too complicated
from qualityscaler.