
airbase's Introduction


🌬 AirBase

An easy downloader for the AirBase air quality data.

AirBase is an air quality database provided by the European Environment Agency (EEA). The data is available for download at the portal, but the interface makes bulk downloads rather time-consuming. Hence, an easy Python-based interface.

Read the full documentation at https://airbase.readthedocs.io.

🔌 Installation

To install airbase, simply run

$ pip install airbase

🚀 Getting Started

🗺 Get info about available countries and pollutants:

>>> import airbase
>>> client = airbase.AirbaseClient()
>>> client.all_countries
['GR', 'ES', 'IS', 'CY', 'NL', 'AT', 'LV', 'BE', 'CH', 'EE', 'FR', 'DE', ...

>>> client.all_pollutants
{'k': 412, 'CO': 10, 'NO': 38, 'O3': 7, 'As': 2018, 'Cd': 2014, ...

>>> client.pollutants_per_country
{'AD': [{'pl': 'CO', 'shortpl': 10}, {'pl': 'NO', 'shortpl': 38}, ...

>>> client.search_pollutant("O3")
[{'pl': 'O3', 'shortpl': 7}, {'pl': 'NO3', 'shortpl': 46}, ...

🗂 Request download links from the server and save the resulting CSVs into a directory:

>>> r = client.request(country=["NL", "DE"], pl="NO3", year_from=2015)
>>> r.download_to_directory(dir="data", skip_existing=True)
Generating CSV download links...
100%|██████████| 2/2 [00:03<00:00,  2.03s/it]
Generated 12 CSV links ready for downloading
Downloading CSVs to data...
100%|██████████| 12/12 [00:01<00:00,  8.44it/s]

💾 Or concatenate them into one big file:

>>> r = client.request(country="FR", pl=["O3", "PM10"], year_to=2014)
>>> r.download_to_file("data/raw.csv")
Generating CSV download links...
100%|██████████| 2/2 [00:12<00:00,  7.40s/it]
Generated 2,029 CSV links ready for downloading
Writing data to data/raw.csv...
100%|██████████| 2029/2029 [31:23<00:00,  1.04it/s]

📦 Download the entire dataset (not for the faint of heart):

>>> r = client.request()
>>> r.download_to_directory("data")
Generating CSV download links...
100%|██████████| 40/40 [03:38<00:00,  2.29s/it]
Generated 146,993 CSV links ready for downloading
Downloading CSVs to data...
  0%|          | 299/146993 [01:50<17:15:06,  2.36it/s]

🌡 Don't forget to get the metadata about the measurement stations:

>>> client.download_metadata("data/metadata.tsv")
Writing metadata to data/metadata.tsv...

🚆 Command line interface

$ airbase download --help
Usage: airbase download [OPTIONS]
  Download all pollutants for all countries

  The -c/--country and -p/--pollutant options allow you to specify which data to download, e.g.
  - download only Norwegian, Danish and Finnish sites
    airbase download -c NO -c DK -c FI
  - download only SO2, PM10 and PM2.5 observations
    airbase download -p SO2 -p PM10 -p PM2.5

Options:
  -c, --country [AD|AL|AT|...]
  -p, --pollutant [k|CO|NO|...]
  --path PATH                     [default: data]
  --year INTEGER                  [default: 2022]
  -O, --overwrite                 Re-download existing files.
  -q, --quiet                     No progress-bar.
  --help                          Show this message and exit.

🛣 Roadmap

  • Parallel CSV downloads (contributed by @avaldebe)
  • CLI to avoid using Python altogether (contributed by @avaldebe)
  • Data wrangling module for AirBase output data

airbase's People

Contributors

avaldebe, heikoklein, johnpaton


airbase's Issues

Drop "pl"/"shortpl" terminology in favor of something more readable

This terminology is only really used internally by the portal for generating URLs; it doesn't need to be exposed to the user.

The CSVs themselves use AirPollutant for the name (now: pl), AirPollutantCode for the URL containing the id (now: shortpl), and the portal refers to the "pollutant id" in the text.

The data dictionary calls them "pollutant notation" and "pollutant id"
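
For reference, the numeric id can be recovered from the AirPollutantCode column; a minimal sketch, assuming the code is the usual Eionet vocabulary URL (the example value here is illustrative):

# The numeric "pollutant id" is the last path segment of AirPollutantCode.
code_url = "http://dd.eionet.europa.eu/vocabulary/aq/pollutant/7"  # example value
pollutant_id = int(code_url.rsplit("/", 1)[-1])
assert pollutant_id == 7  # O3's id, i.e. the current "shortpl"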

use async request library

Hi John,

Have you considered using an asynchronous request library?
Replacing requests with aiohttp would give you concurrent downloads,
like in the third implementation shown in this article.

Would you accept a PR?
I have no experience with async Python, but I'm willing to give it a try.

Cheers,
Álvaro.
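
A minimal sketch of the concurrency this would enable, assuming aiohttp is installed and a flat list of CSV links; the function names are illustrative, not part of airbase:

import asyncio
import aiohttp

async def download_one(session: aiohttp.ClientSession, url: str, path: str) -> None:
    # Fetch one CSV and write it to disk.
    async with session.get(url) as resp:
        resp.raise_for_status()
        data = await resp.read()
    with open(path, "wb") as f:
        f.write(data)

async def download_all(urls: list[str], dirname: str) -> None:
    # One shared session; all downloads run concurrently.
    async with aiohttp.ClientSession() as session:
        await asyncio.gather(
            *(download_one(session, url, f"{dirname}/{url.rsplit('/', 1)[-1]}") for url in urls)
        )

# asyncio.run(download_all(links, "data"))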

PyPI release action needs verified email address

@JohnPaton
looks like PyPI has increased security since the last release,
which caused the CI release of a new version to fail:

ERROR    HTTPError: 400 Bad Request from https://upload.pypi.org/legacy/        
         User 'johnpaton' does not have a verified primary email address. Please
         add a verified primary email before attempting to upload to PyPI. See  
         https://pypi.org/help/#verified-email for more information.    

https://github.com/JohnPaton/airbase/actions/runs/9676343872/job/26695754041#step:7:29

Switch to setuptools_scm for versioning

  • Switch to using setuptools_scm for the package versioning
  • CI should release to PyPI on any new tag (#13)
  • Should start writing a __version__.py (or similar) inside the package for easily checking the version from Python
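
A minimal sketch of the last point, assuming the version should be read from the installed package metadata (so it stays in sync with whatever setuptools_scm derives from the tag):

# airbase/__init__.py (sketch)
from importlib.metadata import PackageNotFoundError, version

try:
    __version__ = version("airbase")
except PackageNotFoundError:
    # Running from a source checkout that hasn't been installed.
    __version__ = "unknown"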

Make search_pollutant and pollutants_per_country output consistent with all_pollutants

From the readme:

>>> import airbase
>>> client = airbase.AirbaseClient()
>>> client.all_countries
['GR', 'ES', 'IS', 'CY', 'NL', 'AT', 'LV', 'BE', 'CH', 'EE', 'FR', 'DE', ...

>>> client.all_pollutants
{'k': 412, 'CO': 10, 'NO': 38, 'O3': 7, 'As': 2018, 'Cd': 2014, ...

>>> client.pollutants_per_country
{'AD': [{'pl': 'CO', 'shortpl': 10}, {'pl': 'NO', 'shortpl': 38}, ...

>>> client.search_pollutant("O3")
[{'pl': 'O3', 'shortpl': 7}, {'pl': 'NO3', 'shortpl': 46}, ...

It would be more intuitive if all of these had the same format (either the list of pl/shortpl dicts, or the big pl:shortpl dict). The current mix is confusing.

This will be a breaking change though so should be handled with care.
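
For illustration, normalizing the list-of-dicts shape into the flat mapping used by all_pollutants is a one-liner; the records below are copied from the output above:

# Collapse [{'pl': ..., 'shortpl': ...}, ...] into the {name: id} form.
records = [{"pl": "O3", "shortpl": 7}, {"pl": "NO3", "shortpl": 46}]
as_dict = {rec["pl"]: rec["shortpl"] for rec in records}
assert as_dict == {"O3": 7, "NO3": 46}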

No connection adapters were found

Hi,

I very much like this tool to extract information from the Airbase database. Thanks for this.
As a test, I was trying to select data from Belgium, for eg. O3 since 2018.

I tried both the download_to_file (my preference) and download_to_directory methods.

Yet both fail at the same point, 23/70, with the following error:

File "/home/demuzmp4/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3291, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 1, in
r.download_to_file(ofile)
File "/home/demuzmp4/.local/lib/python3.6/site-packages/airbase/airbase.py", line 453, in download_to_file
r = requests.get(url)
File "/home/demuzmp4/.local/lib/python3.6/site-packages/requests/api.py", line 75, in get
return request('get', url, params=params, **kwargs)
File "/home/demuzmp4/.local/lib/python3.6/site-packages/requests/api.py", line 60, in request
return session.request(method=method, url=url, **kwargs)
File "/home/demuzmp4/.local/lib/python3.6/site-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/home/demuzmp4/.local/lib/python3.6/site-packages/requests/sessions.py", line 640, in send
adapter = self.get_adapter(url=request.url)
File "/home/demuzmp4/.local/lib/python3.6/site-packages/requests/sessions.py", line 731, in get_adapter
raise InvalidSchema("No connection adapters were found for '%s'" % url)
requests.exceptions.InvalidSchema: No connection adapters were found for 'https://ereporting.blob.core.windows.net/downloadservice/BE_6001_42519_2018_timeseries.csv'

They both fail at the same URL, even though the file can be downloaded via a browser.

To me it is not clear why this fails. I also tried with raise_for_status=False, but the problem persists.

Would it make sense for the download functions to include a try/except statement, allowing them to continue even when they fail to retrieve a file?

Or perhaps there is another way on how to address this?

Cheers,
Matthias
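
A minimal sketch of the suggested skip-on-error behaviour, assuming a plain list of CSV links; this is not airbase's actual implementation:

import requests

def download_links(urls, dirname):
    failed = []
    for url in urls:
        try:
            resp = requests.get(url)
            resp.raise_for_status()
        except requests.RequestException as exc:
            failed.append((url, exc))  # remember the failure and keep going
            continue
        filename = url.rsplit("/", 1)[-1]
        with open(f"{dirname}/{filename}", "wb") as f:
            f.write(resp.content)
    return failed  # report failures back instead of aborting the whole run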

cli request confuses "CO" with "Co"

When attempting to download CO data, no data is found:

$ airbase pollutant CO --year 2023
Generating CSV download links...
100%|███████████████████████████████████████████████████████| 40/40 [00:52<00:00,  1.32s/it]
Generated 0 CSV links ready for downloading
Downloading CSVs to data...
0it [00:00, ?it/s]

After some debugging, I found out that this is a problem in typer (fastapi/typer#570).
The CO CLI option is interpreted as Co, which has no data...
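
For illustration, the collision is not in Python's enums, which keep the two names distinct; a minimal sketch:

from enum import Enum

class Pollutant(str, Enum):
    CO = "CO"  # carbon monoxide
    Co = "Co"  # cobalt

# Python itself distinguishes the two members just fine...
assert Pollutant["CO"] is not Pollutant["Co"]
# ...so the case-folding happens in typer's option parsing, not in the enum.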

Add cli for downloading

This can basically be a 1-1 map from #29 to a command line interface, to avoid needing to write any code to start downloading.

The specified blob does not exist. for url ....

Dear John,
I am trying to download Airbase data for 2019 and 2020.
import airbase
client = airbase.AirbaseClient()
client.all_countries
client.all_pollutants

for i in range(len(client.all_countries)):
    print (i)
    r = client.request(country=client.all_countries[i], pl=["O3","NO","NO2"],year_from=2019,update_date="2019-01-01 00:00:00")
    r.download_to_file("/path/raw_o3_no_no2_"f"{client.all_countries[i]}"".csv")

But I get this error

Traceback (most recent call last):
  File "<stdin>", line 4, in <module>
  File "/home/srvx11/lehre/users/a1276905/.conda/envs/py36/lib/python3.6/site-packages/airbase/airbase.py", line 456, in download_to_file
    r.raise_for_status()
  File "/home/srvx11/lehre/users/a1276905/.conda/envs/py36/lib/python3.6/site-packages/requests/models.py", line 940, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: The specified blob does not exist. for url: https://ereporting.blob.core.windows.net/downloadservice/AT_7_48957_2020_timeseries.cs

Could you please let me know how to fix it?
Best regards,
Omid.
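
A minimal sketch of tolerating the missing blobs specifically, assuming plain requests; fetch_csv is illustrative, not part of airbase:

from typing import Optional

import requests

def fetch_csv(url: str) -> Optional[bytes]:
    """Return the CSV body, or None when the blob is missing (HTTP 404)."""
    resp = requests.get(url)
    if resp.status_code == 404:
        return None  # the listed link points at a blob that doesn't exist
    resp.raise_for_status()
    return resp.content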

Implement functional API for downloading

Like requests, create a new API that bypasses the client (by handling it internally) so that users can jump directly to download_*ing the data.
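
A minimal sketch of what such a layer could look like, assuming it simply wraps AirbaseClient; the module-level download_to_directory here is hypothetical, not the current API:

import airbase

def download_to_directory(dirname, **request_kwargs):
    # Hypothetical functional wrapper: build and use the client internally.
    client = airbase.AirbaseClient()
    r = client.request(**request_kwargs)
    r.download_to_directory(dir=dirname, skip_existing=True)

# download_to_directory("data", country="NL", pl="O3", year_from=2020)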

Can't download a year of data, getting IO-Error: Too many open files

Hi,
I tried to download a year (2022) of data with:

airbase download --path 2022/ --year 2022 -p SO2 -p PM10 -p O3 -p NO2 -p CO -p NO -p PM2.5

That is a total of ~18,000 files, and the script crashed with an IOError: too many open files. I checked ulimit -Sn, which was 1024.

In the end I managed to download the data by:

  1. finding a server with a file-limit of 4096
  2. splitting the request into one request per component (max 4300 files per component)
  3. having ~20 GB of memory per component

I didn't find a way to restrict the number of simultaneously open files. A semaphore, as in this example, might be needed to reduce resource usage: Tinche/aiofiles#83

Best regards,
Heiko
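
A minimal sketch of the semaphore idea, assuming an asyncio/aiohttp download loop; the cap and function names are illustrative:

import asyncio
import aiohttp

MAX_OPEN = 100  # stay well below ulimit -Sn

async def fetch(session: aiohttp.ClientSession, url: str, sem: asyncio.Semaphore) -> bytes:
    async with sem:  # at most MAX_OPEN sockets in flight at once
        async with session.get(url) as resp:
            resp.raise_for_status()
            return await resp.read()

async def fetch_all(urls: list[str]) -> list[bytes]:
    sem = asyncio.Semaphore(MAX_OPEN)
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(fetch(session, url, sem) for url in urls))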

Support additional output types

Right now we only support CSV, which is what the portal provides. We could convert to other file formats (parquet, avro) on the fly for easier processing later.
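
A minimal sketch of a post-download conversion, assuming pandas and pyarrow are available; the paths match the README example:

import pandas as pd

# Re-encode the concatenated CSV as parquet for faster, smaller reads later.
df = pd.read_csv("data/raw.csv")
df.to_parquet("data/raw.parquet")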
