ehoogeboom / hexaconv
When converting cartesian coordinates to hexagonal ones, it is convenient (though not necessary) to let both coordinate systems share the same origin and to align one of the axes, for example aligning n1 with x. But how should the unit length be chosen? The implementations in hexaconv/groupy/hexa/hexa_sample.py and groupy.girds.hexa_lattice.py assume both coordinate systems have the same unit length. If we halve the unit length of the hexagonal coordinates, the number of sampling points grows to roughly four times as many. What is the influence of the unit length on performance?
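To make the unit-length question concrete, here is a small sketch (my own illustration, not code from this repo, assuming the axial convention described above with n1 aligned to x): halving the hexagonal unit length quadruples the number of lattice points per unit area, and with it the size of the resampled feature maps and the compute per layer.

```python
import numpy as np

# Axial hexagonal coordinates (n1, n2) with unit length h, n1 aligned
# with the x axis, map to cartesian coordinates via
#   x = h * (n1 + n2 / 2),   y = h * (sqrt(3) / 2) * n2.
def hex_to_cartesian(n1, n2, h=1.0):
    return h * (n1 + n2 / 2.0), h * (np.sqrt(3.0) / 2.0) * n2

def points_inside_square(h, L=10.0):
    """Count hexagonal lattice points falling inside [0, L) x [0, L)."""
    n = int(np.ceil(2 * L / h))  # index range large enough to cover the square
    count = 0
    for n1 in range(-n, n):
        for n2 in range(-n, n):
            x, y = hex_to_cartesian(n1, n2, h)
            if 0 <= x < L and 0 <= y < L:
                count += 1
    return count

c1 = points_inside_square(1.0)   # unit length 1
c2 = points_inside_square(0.5)   # half the unit length
# c2 / c1 == 4: halving h quadruples the sampling density.
```

So the unit length is a resolution/cost trade-off: a smaller hexagonal unit length reduces interpolation loss when resampling the cartesian input but scales memory and FLOPs roughly quadratically in 1/h.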
Is G-Pooling included in this implementation?
When resampling the input image, the hexagonal lattice has fractional coordinates in the cartesian coordinate system, so interpolation is needed to obtain the corresponding values. In the Chainer implementation, we can call scipy.interpolate.interpn or scipy.ndimage.interpolation.map_coordinates. In TensorFlow, I could only find functions such as tf.image.resize_images and tf.image.resize_bicubic, both of which only deal with integer coordinates. So, how can I interpolate an image at fractional coordinates in a TensorFlow implementation?
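One option is to implement bilinear sampling yourself. The sketch below (my own code, not from this repo) reproduces what scipy.ndimage.map_coordinates does with order=1; every NumPy operation it uses (floor, clip, fancy indexing, elementwise arithmetic) has a direct TensorFlow counterpart (tf.floor, tf.clip_by_value, tf.gather_nd, ...), so the same logic can be ported into a TF graph:

```python
import numpy as np

def bilinear_sample(img, rows, cols):
    """Bilinearly interpolate img (2D array) at fractional (row, col) points."""
    rows = np.asarray(rows, dtype=np.float64)
    cols = np.asarray(cols, dtype=np.float64)
    # Integer corner below each sample point, clipped so the +1 neighbor exists.
    r0 = np.clip(np.floor(rows).astype(int), 0, img.shape[0] - 2)
    c0 = np.clip(np.floor(cols).astype(int), 0, img.shape[1] - 2)
    dr = rows - r0
    dc = cols - c0
    # Interpolate along columns on the two bracketing rows, then along rows.
    top = img[r0, c0] * (1 - dc) + img[r0, c0 + 1] * dc
    bot = img[r0 + 1, c0] * (1 - dc) + img[r0 + 1, c0 + 1] * dc
    return top * (1 - dr) + bot * dr

img = np.arange(16, dtype=np.float64).reshape(4, 4)  # img[r, c] = 4*r + c
val = bilinear_sample(img, [1.5], [0.25])
# Bilinear interpolation is exact on this linear image: 4*1.5 + 0.25 = 6.25
```

Since the hexagonal sampling coordinates are fixed at graph-construction time, you can also precompute the four gather indices and weights once with NumPy and leave only the gathers and weighted sum inside the graph. (Newer TensorFlow ecosystems also provide a ready-made version of this, I believe as tfa.image.interpolate_bilinear in TensorFlow Addons, but that postdates this repo.)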
Hi ehoogeboom,
Recently, while working on a new project that involves a hexagonal grid, we ran into issues when I tried to project the network onto a 2D plane. As a workaround, I took the different feature maps, projected them onto 2D, and stacked them on top of each other.
While researching, I came across your paper and decided to give it a try. However, there is no PyTorch implementation, and given my priorities I might try to code one up. Do you have any suggestions on how to start, and which pieces should I focus on?
Best,
Yash
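For anyone attempting a port, one natural piece to start with is the axial-coordinate trick from the paper: when a hexagonal lattice is stored in axial coordinates, the 7-point hexagonal neighborhood of a pixel becomes a 3x3 square stencil with two opposite corners removed, so a hexagonal convolution is an ordinary square convolution whose kernel corners are masked to zero. The sketch below is my own NumPy illustration of that idea (not this repo's code; in PyTorch you would multiply the weight tensor by the same mask before calling torch.nn.functional.conv2d, and which pair of corners is masked depends on your axial convention):

```python
import numpy as np

# 3x3 hexagonal neighborhood mask on an axial grid: the two masked
# corners are the square-grid neighbors that are NOT hexagonal neighbors.
HEX_MASK = np.array([[0, 1, 1],
                     [1, 1, 1],
                     [1, 1, 0]], dtype=np.float64)

def hex_conv2d(img, kernel):
    """Valid cross-correlation with a corner-masked (hexagonal) 3x3 kernel."""
    k = kernel * HEX_MASK  # zero out the two non-hexagonal taps
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out
```

The other main pieces would be the filter-rotation indexing for the group dimension (60-degree rotations permute the 6 outer taps of the hexagonal stencil) and the cartesian-to-hexagonal resampling of the input.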
I received the following errors:
```
{'datadir': '/workspace/hexaconv-master/experiments/CIFAR10/DataCifar', 'resultdir': '/workspace/hexaconv-master/experiments/CIFAR10/DataCifarResults', 'modelfn': '/workspace/hexaconv-master/experiments/CIFAR10/models/P4WideResNet.py', 'trainfn': 'train_all.npz', 'valfn': 'test.npz', 'epochs': 300, 'batchsize': 125, 'opt': 'MomentumSGD', 'opt_kwargs': {'lr': 0.05}, 'net_kwargs': {}, 'weight_decay': 0.001, 'lr_decay_schedule': '50-100-150', 'lr_decay_factor': 0.1, 'transformations': '', 'val_freq': 40, 'save_freq': 100, 'gpu': 0, 'seed': 0, 'hex_sampling': ''}
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/cupy/cuda/compiler.py", line 241, in compile
    nvrtc.compileProgram(self.ptr, options)
  File "cupy/cuda/nvrtc.pyx", line 98, in cupy.cuda.nvrtc.compileProgram
  File "cupy/cuda/nvrtc.pyx", line 108, in cupy.cuda.nvrtc.compileProgram
  File "cupy/cuda/nvrtc.pyx", line 53, in cupy.cuda.nvrtc.check_status
cupy.cuda.nvrtc.NVRTCError: NVRTC_ERROR_COMPILATION (6)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "train_cifar.py", line 291, in
    val_error, model = train(logme=vargs, **vargs)
  File "train_cifar.py", line 154, in train
    model, optimizer = get_model_and_optimizer(resultdir, modelfn, opt, opt_kwargs, net_kwargs, gpu)
  File "train_cifar.py", line 46, in get_model_and_optimizer
    module = imp.load_source(model_name, modelfn)
  File "/opt/conda/lib/python3.6/imp.py", line 172, in load_source
    module = _load(spec)
  File "", line 684, in _load
  File "", line 665, in _load_unlocked
  File "", line 678, in exec_module
  File "", line 219, in _call_with_frames_removed
  File "/workspace/hexaconv-master/experiments/CIFAR10/models/P4WideResNet.py", line 8, in
    from groupy.gconv.gconv_chainer.p4_conv import P4ConvZ2, P4ConvP4
  File "/workspace/hexaconv-master/groupy/gconv/gconv_chainer/p4_conv.py", line 1, in
    from groupy.gconv.gconv_chainer.splitgconv2d import SplitGConv2D
  File "/workspace/hexaconv-master/groupy/gconv/gconv_chainer/splitgconv2d.py", line 10, in
    from groupy.gconv.gconv_chainer.TransformFilter import TransformGFilter
  File "/workspace/hexaconv-master/groupy/gconv/gconv_chainer/TransformFilter.py", line 8, in
    from groupy.gconv.gconv_chainer.kernels.integer_indexing_cuda_kernel import grad_index_group_func_kernel
  File "/workspace/hexaconv-master/groupy/gconv/gconv_chainer/kernels/integer_indexing_cuda_kernel.py", line 61, in
    _index_group_func_kernel32 = compile_with_cache(_index_group_func_str.format('float')).get_function('indexing_kernel')
  File "cupy/core/carray.pxi", line 125, in cupy.core.core.compile_with_cache
  File "cupy/core/carray.pxi", line 146, in cupy.core.core.compile_with_cache
  File "/opt/conda/lib/python3.6/site-packages/cupy/cuda/compiler.py", line 164, in compile_with_cache
    ptx = compile_using_nvrtc(source, options, arch)
  File "/opt/conda/lib/python3.6/site-packages/cupy/cuda/compiler.py", line 82, in compile_using_nvrtc
    ptx = prog.compile(options)
  File "/opt/conda/lib/python3.6/site-packages/cupy/cuda/compiler.py", line 245, in compile
    raise CompileException(log, self.src, self.name, options)
cupy.cuda.compiler.CompileException: /tmp/tmp_vh4y1f6/kern.cu(14): error: a value of type "const ptrdiff_t *" cannot be used to initialize an entity of type "const int *"
/tmp/tmp_vh4y1f6/kern.cu(15): error: a value of type "const ptrdiff_t *" cannot be used to initialize an entity of type "const int *"
2 errors detected in the compilation of "/tmp/tmp_vh4y1f6/kern.cu".
```
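For what it's worth, this looks like a CuPy version mismatch rather than a bug in the model: if I remember correctly, newer CuPy releases changed the shape/stride types exposed to raw kernels from int to ptrdiff_t, so the two declarations at kern.cu lines 14-15 (generated from the kernel template in groupy/gconv/gconv_chainer/kernels/integer_indexing_cuda_kernel.py) no longer compile. A sketch of the kind of change needed (variable names here are hypothetical, check the actual template):

```
// before (fails on newer CuPy):
const int* shape = indexer.shape();
// after:
const ptrdiff_t* shape = indexer.shape();
```

Alternatively, pinning the Chainer/CuPy versions contemporary with this repo should avoid the type change entirely.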
Hello ehoogeboom,
Thanks for your wonderful work on HexaConv. I mostly use TensorFlow. Is there a TensorFlow implementation of HexaConv?