Comments (7)
As an example, the following patch avoids creating the _h5ds object on demand (it keeps a hard reference instead).
diff --git a/h5netcdf/core.py b/h5netcdf/core.py
index 39b0b5a..cb7f2b1 100644
--- a/h5netcdf/core.py
+++ b/h5netcdf/core.py
@@ -114,6 +114,10 @@ class BaseVariable(object):
         self._dimensions = dimensions
         self._initialized = True
 
+        # Always refer to the root file and store not h5py object
+        # subclasses:
+        self._h5ds = self._root._h5file[self._h5path]
+
     @property
     def _parent(self):
         return self._parent_ref()
@@ -122,12 +126,6 @@ class BaseVariable(object):
     def _root(self):
         return self._root_ref()
 
-    @property
-    def _h5ds(self):
-        # Always refer to the root file and store not h5py object
-        # subclasses:
-        return self._root._h5file[self._h5path]
-
     @property
     def name(self):
         """Return variable name."""
diff --git a/h5netcdf/dimensions.py b/h5netcdf/dimensions.py
index d291731..d5bda4d 100644
--- a/h5netcdf/dimensions.py
+++ b/h5netcdf/dimensions.py
@@ -86,8 +86,13 @@ class Dimension(object):
         else:
             self._root._max_dim_id += 1
             self._dimensionid = self._root._max_dim_id
-        if parent._root._writable and create_h5ds and not self._phony:
+        if self._phony:
+            self._h5ds = None
+        elif parent._root._writable and create_h5ds:
+            # _create_scale will create the self._h5ds object
             self._create_scale()
+        else:
+            self._h5ds = self._root._h5file[self._h5path]
         self._initialized = True
 
     @property
@@ -131,12 +136,6 @@ class Dimension(object):
             return False
         return self._h5ds.maxshape == (None,)
 
-    @property
-    def _h5ds(self):
-        if self._phony:
-            return None
-        return self._root._h5file[self._h5path]
-
     @property
     def _isscale(self):
         return h5py.h5ds.is_scale(self._h5ds.id)
@@ -183,6 +182,7 @@ class Dimension(object):
             dtype=">f4",
             **kwargs,
         )
+        self._h5ds = self._root._h5file[self._h5path]
         self._h5ds.attrs["_Netcdf4Dimid"] = np.array(self._dimid, dtype=np.int32)
         if len(self._h5ds.shape) > 1:
This brings the runtime down from 500 ms (on my computer) to 22 ms.
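The core idea of the patch can be illustrated outside h5netcdf: a property that performs a container lookup on every access pays that cost on each read, while an attribute bound once at construction time pays it once. A minimal sketch with hypothetical class names (not h5netcdf code):

```python
from time import perf_counter


class OnDemand:
    """Re-resolves the handle on every access, like a lookup property."""

    def __init__(self, store, key):
        self._store = store
        self._key = key

    @property
    def handle(self):
        # A fresh lookup happens on each attribute access.
        return self._store[self._key]


class Cached:
    """Resolves the handle once and keeps a hard reference."""

    def __init__(self, store, key):
        self.handle = store[key]  # bound once at construction time


store = {"x": object()}
on_demand, cached = OnDemand(store, "x"), Cached(store, "x")

for obj in (on_demand, cached):
    start = perf_counter()
    for _ in range(100_000):
        obj.handle
    print(f"{type(obj).__name__}: {perf_counter() - start:.4f} s")
```

The trade-off is that the cached reference can go stale if the underlying store is swapped out, which is exactly why such a change needs test-suite coverage.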
Relevant discussion: #117
@hmaarrfk Thanks for providing all that information and for your engagement in making h5netcdf more performant. I've taken a first step towards minimizing re-reading from the underlying h5py/HDF5 objects in #196. The changes are minor, but they have a huge effect on the presented use case.
I've tried your patch here, but it broke the test suite. I'll look deeper into this over the next few days.
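One middle ground between an on-demand property and eager caching in __init__ is lazy caching that can be invalidated. This is only a sketch of the general pattern using functools.cached_property, with hypothetical names, not a proposed h5netcdf change:

```python
from functools import cached_property


class Handle:
    """Hypothetical sketch: lazy caching with explicit invalidation."""

    def __init__(self, store, key):
        self._store = store
        self._key = key

    @cached_property
    def h5ds(self):
        # Computed on first access, then served from the instance __dict__.
        return self._store[self._key]

    def invalidate(self):
        # Drop the cached value, e.g. after the underlying file is reopened.
        self.__dict__.pop("h5ds", None)


store = {"x": "dataset-a"}
h = Handle(store, "x")
first = h.h5ds          # lookup happens here
store["x"] = "dataset-b"
stale = h.h5ds          # still the cached "dataset-a"
h.invalidate()
fresh = h.h5ds          # re-resolved to "dataset-b"
```

The stale/fresh distinction above is the kind of behavior a test suite would catch if a cached reference outlives the object it points into.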
Thank you! My patch was more of a proof of concept that h5netcdf doesn't have to be slow. I was trying to get pympler installed to recreate the cyclic dependency graphs before commenting further.
I also think that we (personally) need to tune our file structure. I remember reading about HDF5 metadata needing to be tuned for "large" files.
Were you able to reproduce similar performance trends with a newly created, hand-crafted file? We likely generated the original netCDF4 file with netcdf4-c (primarily due to performance concerns with h5netcdf).
@hmaarrfk Do you think it would be useful to create several datasets from scratch with netCDF4-python/h5netcdf? We could build a benchmark suite from that, covering your and others' specific use cases, including examples from the past. At least this would give us some comparison between different approaches.
> We could build a benchmark suite

Building benchmarking infrastructure is non-trivial. I'm happy to help create the datasets and to "donate" some "real world" datasets.
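As a starting point, a benchmark suite does not need much machinery; the standard-library timeit module already gives repeatable timings. A minimal harness sketch with stand-in workloads (a real suite would open netCDF4-python- and h5netcdf-written files instead of the hypothetical dict below):

```python
import timeit


def bench(label, func, number=1000):
    """Run func `number` times and report the total wall time."""
    elapsed = timeit.timeit(func, number=number)
    print(f"{elapsed:07.4f} s --- {label} ({number}x)")
    return elapsed


# Hypothetical stand-in data, not a netCDF file.
data = {f"var{i}": list(range(100)) for i in range(50)}
t_lookup = bench("dict lookup", lambda: data["var25"])
t_scan = bench("full scan", lambda: sum(len(v) for v in data.values()))
```

Collecting such `bench` results per library and per file layout would make regressions between h5netcdf versions visible.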
For this particular case, I found that it isn't dependent on my own nc file:
import h5netcdf
from pathlib import Path
from time import perf_counter
import tempfile

with tempfile.TemporaryDirectory() as tmpdir:
    filename = Path(tmpdir) / 'h5netcdf_test.nc'
    h5nc_file = h5netcdf.File(filename, 'w')
    h5nc_file.dimensions = {'x': 1024}
    x = h5nc_file.dimensions['x']

    time_start = perf_counter()
    for i in range(1000):
        len(x)
    time_end = perf_counter()
    time_elapsed = time_end - time_start
    print(f"{time_elapsed:03.3f} --- Time to compute length 1000 times: ")

    time_start = perf_counter()
    for i in range(1000):
        x._h5ds
    time_end = perf_counter()
    time_elapsed = time_end - time_start
    print(f"{time_elapsed:03.3f} --- Time to access h5ds reference 1000 times")

    time_start = perf_counter()
    for i in range(1000):
        x._root._h5file[x._h5path]
    time_end = perf_counter()
    time_elapsed = time_end - time_start
    print(f"{time_elapsed:03.3f} --- Time to create h5ds reference 1000 times")

    h5nc_file.close()
# %% on main
# 0.053 --- Time to compute length 1000 times:
# 0.046 --- Time to access h5ds reference 1000 times
# 0.046 --- Time to create h5ds reference 1000 times
# %% On https://github.com/h5netcdf/h5netcdf/pull/197
# 0.003 --- Time to compute length 1000 times:
# 0.000 --- Time to access h5ds reference 1000 times
# 0.034 --- Time to create h5ds reference 1000 times