
polca / premise


Coupling Integrated Assessment Models output with Life Cycle Assessment.

License: BSD 3-Clause "New" or "Revised" License

Python 40.24% Jupyter Notebook 59.76%
lifecycle energy ecoinvent transport inventory

premise's People

Contributors

b-maes, brianlcox, charpprecht, cmutel, kais-siala, loisel, m-rossi, marc-vdm, romainsacchi, simb-sdu, stew-mcd, timodiepers, tngtudor, tomterlouw, vtulus, xiaoshir



premise's Issues

InvalidLink: Exchange links to non-existent activity

Hi all,
I am running premise with an ecoinvent 3.6 ecospold database. Everything works fine when creating new databases with the REMIND SSP2_NDC scenario for many different years and then writing them to Brightway2 format. However, with the REMIND SSP2_Base, NPi, and PkBudg scenarios, an error appears when writing the new database to Brightway2, related to an exchange linking to a non-existent activity. In case you have any idea how to solve this, I would appreciate it a lot. Thanks again.

....
Relink new steel production datasets to steel-consuming activities
Write new database(s) to Brightway2.
One or multiple duplicates detected. Removing them...

InvalidLink Traceback (most recent call last)
in
12
13 ndb.update_all()
---> 14 ndb.write_db_to_brightway()

~\Anaconda3\envs\myenv\lib\site-packages\premise\ecoinvent_modification.py in write_db_to_brightway(self, name)
843 scenario["database"] = self.check_for_duplicates(scenario["database"])
844
--> 845 wurst.write_brightway2_database(
846 scenario["database"], name[s],
847 )

~\Anaconda3\envs\myenv\lib\site-packages\wurst\brightway\write_database.py in write_brightway2_database(data, name)
49 change_db_name(data, name)
50 link_internal(data)
---> 51 check_internal_linking(data)
52 check_duplicate_codes(data)
53 WurstImporter(name, data).write_database()

~\Anaconda3\envs\myenv\lib\site-packages\wurst\linking.py in check_internal_linking(data)
42 if exc.get("input") and exc["input"][0] in names:
43 if exc["input"] not in keys:
---> 44 raise InvalidLink(
45 "Exchange links to non-existent activity:\n{}".format(
46 pformat(exc)

InvalidLink: Exchange links to non-existent activity:
{'activity': 'e7fbcb70-eb35-4d42-8905-a18441378a0e',
'amount': 1.0,
'classifications': {'CPC': ['37430: Cement clinkers']},
'comment': 'EcoSpold01Location=CH',
'flow': '1f41586d-0d8a-4c7c-8473-dd8351bab538',
'input': ('ecoinvent_remind_SSP2-NPi_2043',
'5b97c4e3c5775588a48fa1233e6d8d22'),
'loc': 1.0,
'location': 'LAM',
'name': 'clinker production',
'product': 'clinker',
'production volume': 42822334344.0,
'properties': {'carbon allocation': {'amount': 0.0,
'comment': 'carbon content per unit of '
'product (reserved; not for '
'manual entry)',
'unit': 'kg'},
'carbon content': {'amount': 0.0,
'comment': 'carbon content on a dry matter '
'basis (reserved; not for manual '
'entry)',
'unit': 'dimensionless'},
'carbon content, fossil': {'amount': 0.0,
'comment': 'Tab 3.2, part II '
'ecoinvent v2.2 report 7',
'unit': 'dimensionless'},
'carbon content, non-fossil': {'amount': 0.0,
'comment': 'Tab 3.2, part II '
'ecoinvent v2.2 '
'report 7',
'unit': 'dimensionless'},
'dry mass': {'amount': 1.0, 'unit': 'kg'},
'price': {'amount': 0.047,
'comment': 'Calculated value based on data from '
'United Nations Commodity Trade '
'Statistics Database (comtrade.un.org). '
'UN comtrade category: 252310 Cement '
'Clinkers. Using exchange rate of 1EURO = '
'1.209 USD. Average of price of import '
'into 5 main markets (EU, US, JP, IN and '
'CN).',
'unit': 'EUR2005'},
'water content': {'amount': 0.0,
'comment': 'water mass/dry mass',
'unit': 'dimensionless'},
'water in wet mass': {'amount': 0.0, 'unit': 'kg'},
'wet mass': {'amount': 1.0, 'unit': 'kg'}},
'type': 'production',
'uncertainty type': 0,
'unit': 'kilogram'}
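A pre-flight check in the spirit of wurst's `check_internal_linking` (quoted in the traceback above) can surface all dangling links at once before writing. Below is a minimal sketch assuming the wurst-style list-of-dicts database format; `find_dangling_links` is a hypothetical helper, not a premise or wurst function:

```python
# Hypothetical helper: collect exchanges whose "input" key points to an
# activity that does not exist in the database (wurst-style data format).
def find_dangling_links(data):
    # Every activity is identified by its (database name, code) tuple.
    keys = {(ds["database"], ds["code"]) for ds in data}
    names = {ds["database"] for ds in data}
    dangling = []
    for ds in data:
        for exc in ds.get("exchanges", []):
            inp = exc.get("input")
            # Only internal links are checked, mirroring the logic of
            # wurst's check_internal_linking shown in the traceback.
            if inp and inp[0] in names and inp not in keys:
                dangling.append((ds["name"], exc.get("name"), inp))
    return dangling

# Toy database with one dangling link:
db = [
    {"database": "db", "code": "a", "name": "clinker production",
     "exchanges": [{"input": ("db", "missing"), "name": "clinker"}]},
]
print(find_dangling_links(db))
# → [('clinker production', 'clinker', ('db', 'missing'))]
```

Unlike `check_internal_linking`, which raises on the first broken exchange, this returns all of them, which makes it easier to see whether a whole group of datasets failed to relink.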

Uncertainty based on scenarios

@Loisel
Instead of producing one database per scenario, we could build one database in which input values are defined as uncertainty ranges spanning the worst-to-best REMIND scenario variable values.

Before building the database, one could weight the likelihood of the worst and best scenarios coming true ("I believe there is a 60% chance that the world goes to shit, a 35% chance that we follow a moderate path, and a 5% chance that we comply with the Paris Agreement targets", which will probably look like a log-normal or beta distribution). This could help us shape a probability distribution that we could apply to technology market shares, efficiencies, etc.
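The idea can be sketched as a scenario-weighted Monte Carlo draw. All names and numbers below are illustrative (the 60/35/5 split from the example above, and made-up market shares), not premise data:

```python
import random

# Hypothetical scenario weights, taken from the 60/35/5 example above.
scenarios = {"worst": 0.60, "moderate": 0.35, "best": 0.05}
# Hypothetical market share of some technology under each scenario.
share = {"worst": 0.10, "moderate": 0.25, "best": 0.60}

def sample_share(n=10_000, seed=42):
    """Monte Carlo estimate of the market share under weighted scenarios."""
    rng = random.Random(seed)
    names = list(scenarios)
    weights = [scenarios[s] for s in names]
    draws = rng.choices(names, weights=weights, k=n)
    return sum(share[s] for s in draws) / n

# The exact expectation is 0.60*0.10 + 0.35*0.25 + 0.05*0.60 = 0.1775;
# the sample mean should land close to it.
print(round(sample_share(), 3))
```

In practice one would replace the single `share` value per scenario with the full range of IAM variable values, yielding a distribution per parameter rather than a point estimate.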

validation of params for NewDatabase

Once upon a time, a young and proud number was sent to the NewDatabase function.
The number was 3.

It all went well, until it didn't: activity data was gotten, exchange data was filled, global scope was set to datasets with missing location, scope of dataset was set to production exchanges with missing location, technosphere exchanges missing locations were corrected, empty exchanges were removed. But, oh misery, after starting to add Carma CCS inventories, with a worksheet extracted in less than a second, Traceback the mysterious appeared from the dark. It was his call. Traceback started to throw lines and lines of code, until he got tired, after throwing an IndexError.

The leaders from the dimension where 3 was coming, kept sending him, again and again, without success: the IndexError kept winning.

...

Centuries went through and finally, the leaders of another dimension sent a young brave number to replace the original warrior. The number was 3.6.
The sole presence of 3.6 was enough to prevent Traceback the mysterious from appearing, and finally, Carma CCS inventories were added, along with the fellow fossil carbon dioxide storage technologies, Biogas inventories, Electrolysis Hydrogen inventories, and even the last of the legion, methanol-based synthetic fuel inventories.

TL;DR:

I suggest NewDatabase also validates the actual source_version:

  • isinstance(source_version, float)
  • source_version in [3.5, 3.6, 3.7]
>>> _ = NewDatabase(scenario = "SSP2-Base",
... year = 2028,
... source_db = "ecoinvent36-cutoff",
... source_version = 3)
Getting activity data
100%|██████████████████████████████████████| 18121/18121 [00:00<00:00, 95052.80it/s]
Adding exchange data to activities
100%|████████████████████████████████████| 615644/615644 [00:42<00:00, 14529.39it/s]
Filling out exchange data
100%|███████████████████████████████████████| 18121/18121 [00:03<00:00, 4717.18it/s]
Set missing location of datasets to global scope.
Set missing location of production exchanges to scope of dataset.
Correct missing location of technosphere exchanges.
Remove empty exchanges.
Add Carma CCS inventories
Extracted 1 worksheets in 0.48 seconds
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/conda/lib/python3.8/site-packages/rmnd_lca/ecoinvent_modification.py", line 106, in __init__
    self.import_inventories(add_vehicles)
  File "/opt/conda/lib/python3.8/site-packages/rmnd_lca/ecoinvent_modification.py", line 122, in import_inventories
    carma.merge_inventory()
  File "/opt/conda/lib/python3.8/site-packages/rmnd_lca/inventory_imports.py", line 457, in merge_inventory
    self.prepare_inventory()
  File "/opt/conda/lib/python3.8/site-packages/rmnd_lca/inventory_imports.py", line 751, in prepare_inventory
    self.add_product_field_to_exchanges()
  File "/opt/conda/lib/python3.8/site-packages/rmnd_lca/inventory_imports.py", line 545, in add_product_field_to_exchanges
    y["product"] = self.correct_product_field(y)
  File "/opt/conda/lib/python3.8/site-packages/rmnd_lca/inventory_imports.py", line 587, in correct_product_field
    raise IndexError(
IndexError: An inventory exchange in Carma CCS cannot be linked to the biosphere or the ecoinvent database: {'name': 'market for water, completely softened, from decarbonised water, at user', 'amount': 0.006, 'location': 'GLO', 'unit': 'kilogram', 'categories': 'Materials/fuels', 'type': 'technosphere', 'uncertainty type': 2.0, 'loc': -5.115995809754082, 'scale': 0.1682361183106064, 'comment': '(4,5,3,2,3,3); estimate based on literature', 'negative': 0.0, 'simapro name': 'Water, completely softened, at plant/RER U'}
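A validation sketch along the lines suggested above; `SUPPORTED_VERSIONS`, the function name, and the error messages are illustrative, not premise's actual code:

```python
SUPPORTED_VERSIONS = [3.5, 3.6, 3.7]  # illustrative list of accepted versions

def validate_source_version(source_version):
    """Fail early with a clear message instead of a deep IndexError later."""
    if not isinstance(source_version, float):
        raise TypeError(
            f"source_version must be a float, got {type(source_version).__name__}"
        )
    if source_version not in SUPPORTED_VERSIONS:
        raise ValueError(
            f"source_version must be one of {SUPPORTED_VERSIONS}, got {source_version}"
        )
    return source_version

validate_source_version(3.6)   # passes silently
# validate_source_version(3)   # would raise TypeError right away
```

With this check at the top of `NewDatabase.__init__`, the warrior 3 would have been turned away at the gate with a readable message instead of dying to the IndexError deep inside the Carma CCS import.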

cut-off only?

Hi

the readme mentions support for the ecoinvent 3.8 cut-off database. Does this mean that the APOS/consequential databases do not work? And what would it take to support them?

0.4.0 package is missing dependency declaration on cryptography

Although code from the main branch here in github has cryptography in the requirements, the 0.4.0 package from pypi does not have it.

  • The wheel METADATA has:
Metadata-Version: 2.1
Name: premise
Version: 0.4.0
Summary: Coupling IAM output to ecoinvent LCA database ecoinvent for prospective LCA
Home-page: https://github.com/romainsacchi/premise
Author: Alois Dirnaichner <[email protected]>, Chris Mutel <[email protected]>, Tom Terlouw <[email protected]>, Romain Sacchi <[email protected]>
Author-email: UNKNOWN
License: BSD 3-Clause License
Requires-Dist: numpy
Requires-Dist: wurst (>=0.3)
Requires-Dist: bw2io (>=0.8)
Requires-Dist: pandas
Requires-Dist: bw2data
Requires-Dist: brightway2
Requires-Dist: xarray
Requires-Dist: carculator
Requires-Dist: carculator-truck
Requires-Dist: prettytable
Requires-Dist: pycountry

  • and the packaged setup.py has:
   install_requires=[
       'numpy',
       'wurst>=0.3',
       'bw2io>=0.8',
       'pandas',
       'bw2data',
       'brightway2',
       'xarray',
       'carculator',
       'carculator_truck',
       'prettytable',
       'pycountry'
   ],

I suggest @romainsacchi builds a new package and publishes it to PyPI so that the missing dependency is added.

`KeyError` when running default Premise with `update_cement()`

A fresh install of the default branch of premise (git HEAD) against Brightway 2.5 gets a KeyError when update_cement() runs. Other transformations seem to work so far.

Error:

/////////////////// CEMENT ////////////////////

Data specific to the cement sector detected!


Start integration of cement data...

The validity of the datasets produced from the integration of the cement sector is not yet fully tested.
Consider the results with caution.

Log of deleted cement datasets saved in /Users/cmutel/Code/premise/premise/data/logs
Log of created cement datasets saved in /Users/cmutel/Code/premise/premise/data/logs

Create new clinker production datasets and delete old datasets
more than one locations possible for CH: ['NEU', 'NEN']

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
~/venvs/premise/lib/python3.9/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
   3360             try:
-> 3361                 return self._engine.get_loc(casted_key)
   3362             except KeyError as err:

~/venvs/premise/lib/python3.9/site-packages/pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()

~/venvs/premise/lib/python3.9/site-packages/pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()

pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()

pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()

KeyError: 'ESC'

The above exception was the direct cause of the following exception:

KeyError                                  Traceback (most recent call last)
/var/folders/rn/ht0vvs3s7mz2h9f_xjt9x4040000gn/T/ipykernel_11603/1025987357.py in <module>
----> 1 ndb.update_all()

~/Code/premise/premise/ecoinvent_modification.py in update_all(self)
    797         self.update_electricity()
    798         self.update_solar_PV()
--> 799         self.update_cement()
    800         self.update_steel()
    801 

~/Code/premise/premise/ecoinvent_modification.py in update_cement(self)
    653                     )
    654 
--> 655                     scenario["database"] = cement.add_datasets_to_database()
    656 
    657             else:

~/Code/premise/premise/cement.py in add_datasets_to_database(self)
   1172         print("\nCreate new clinker production datasets and delete old datasets")
   1173         clinker_prod_datasets = [
-> 1174             d for d in self.build_clinker_production_datasets().values()
   1175         ]
   1176         self.db.extend(clinker_prod_datasets)

~/Code/premise/premise/cement.py in build_clinker_production_datasets(self)
    512 
    513             # Production volume by kiln type
--> 514             energy_input_per_kiln_type = self.iam_data.gnr_data.sel(
    515                 region=self.geo.iam_to_iam_region(k) if self.model == "image" else k,
    516                 variables=[

~/venvs/premise/lib/python3.9/site-packages/xarray/core/dataarray.py in sel(self, indexers, method, tolerance, drop, **indexers_kwargs)
   1252         Dimensions without coordinates: points
   1253         """
-> 1254         ds = self._to_temp_dataset().sel(
   1255             indexers=indexers,
   1256             drop=drop,

~/venvs/premise/lib/python3.9/site-packages/xarray/core/dataset.py in sel(self, indexers, method, tolerance, drop, **indexers_kwargs)
   2228         """
   2229         indexers = either_dict_or_kwargs(indexers, indexers_kwargs, "sel")
-> 2230         pos_indexers, new_indexes = remap_label_indexers(
   2231             self, indexers=indexers, method=method, tolerance=tolerance
   2232         )

~/venvs/premise/lib/python3.9/site-packages/xarray/core/coordinates.py in remap_label_indexers(obj, indexers, method, tolerance, **indexers_kwargs)
    414     }
    415 
--> 416     pos_indexers, new_indexes = indexing.remap_label_indexers(
    417         obj, v_indexers, method=method, tolerance=tolerance
    418     )

~/venvs/premise/lib/python3.9/site-packages/xarray/core/indexing.py in remap_label_indexers(data_obj, indexers, method, tolerance)
    268             coords_dtype = data_obj.coords[dim].dtype
    269             label = maybe_cast_to_coords_dtype(label, coords_dtype)
--> 270             idxr, new_idx = convert_label_indexer(index, label, dim, method, tolerance)
    271             pos_indexers[dim] = idxr
    272             if new_idx is not None:

~/venvs/premise/lib/python3.9/site-packages/xarray/core/indexing.py in convert_label_indexer(index, label, index_name, method, tolerance)
    189                 indexer = index.get_loc(label_value)
    190             else:
--> 191                 indexer = index.get_loc(label_value, method=method, tolerance=tolerance)
    192         elif label.dtype.kind == "b":
    193             indexer = label

~/venvs/premise/lib/python3.9/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
   3361                 return self._engine.get_loc(casted_key)
   3362             except KeyError as err:
-> 3363                 raise KeyError(key) from err
   3364 
   3365         if is_scalar(key) and isna(key) and not self.hasnans:

KeyError: 'ESC'

Issue extracting IAM data

I just tried using premise 1.2.0 and 1.2.3, and got the following error message when setting up new databases. In particular, it fails at the step of extracting IAM data:

Traceback (most recent call last):
File "C:\miniconda3_py37\envs\materials\lib\site-packages\IPython\core\interactiveshell.py", line 3398, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "C:\Users\zhang_x\AppData\Local\Temp\ipykernel_3292\3795149781.py", line 1, in <cell line: 1>
ndb = NewDatabase(
File "C:\miniconda3_py37\envs\materials\lib\site-packages\premise\ecoinvent_modification.py", line 564, in init
# build file path
File "C:\miniconda3_py37\envs\materials\lib\site-packages\premise\data_collection.py", line 193, in init
File "C:\miniconda3_py37\envs\materials\lib\site-packages\premise\data_collection.py", line 1329, in __get_carbon_capture_rate
data_to_return.coords["variables"] = list(labels)
File "C:\miniconda3_py37\envs\materials\lib\site-packages\xarray\core\dataarray.py", line 199, in getitem
return self.data_array.sel(key)
File "C:\miniconda3_py37\envs\materials\lib\site-packages\xarray\core\dataarray.py", line 1329, in sel
ds = self._to_temp_dataset().sel(
File "C:\miniconda3_py37\envs\materials\lib\site-packages\xarray\core\dataset.py", line 2501, in sel
pos_indexers, new_indexes = remap_label_indexers(
File "C:\miniconda3_py37\envs\materials\lib\site-packages\xarray\core\coordinates.py", line 421, in remap_label_indexers
pos_indexers, new_indexes = indexing.remap_label_indexers(
File "C:\miniconda3_py37\envs\materials\lib\site-packages\xarray\core\indexing.py", line 121, in remap_label_indexers
idxr, new_idx = index.query(labels, method=method, tolerance=tolerance)
File "C:\miniconda3_py37\envs\materials\lib\site-packages\xarray\core\indexes.py", line 245, in query
indexer = get_indexer_nd(self.index, label, method, tolerance)
File "C:\miniconda3_py37\envs\materials\lib\site-packages\xarray\core\indexes.py", line 142, in get_indexer_nd
flat_indexer = index.get_indexer(flat_labels, method=method, tolerance=tolerance)
File "C:\miniconda3_py37\envs\materials\lib\site-packages\pandas\core\indexes\base.py", line 3784, in get_indexer
return self._get_indexer(target, method, limit, tolerance)
File "C:\miniconda3_py37\envs\materials\lib\site-packages\pandas\core\indexes\base.py", line 3809, in _get_indexer
indexer = self._engine.get_indexer(tgt_values)
File "pandas_libs\index.pyx", line 305, in pandas._libs.index.IndexEngine.get_indexer
File "pandas_libs\hashtable_class_helper.pxi", line 5247, in pandas._libs.hashtable.PyObjectHashTable.lookup
TypeError: unhashable type: 'list'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\miniconda3_py37\envs\materials\lib\site-packages\IPython\core\interactiveshell.py", line 1993, in showtraceback
stb = self.InteractiveTB.structured_traceback(
File "C:\miniconda3_py37\envs\materials\lib\site-packages\IPython\core\ultratb.py", line 1118, in structured_traceback
return FormattedTB.structured_traceback(
File "C:\miniconda3_py37\envs\materials\lib\site-packages\IPython\core\ultratb.py", line 1012, in structured_traceback
return VerboseTB.structured_traceback(
File "C:\miniconda3_py37\envs\materials\lib\site-packages\IPython\core\ultratb.py", line 865, in structured_traceback
formatted_exception = self.format_exception_as_a_whole(etype, evalue, etb, number_of_lines_of_context,
File "C:\miniconda3_py37\envs\materials\lib\site-packages\IPython\core\ultratb.py", line 818, in format_exception_as_a_whole
frames.append(self.format_record(r))
File "C:\miniconda3_py37\envs\materials\lib\site-packages\IPython\core\ultratb.py", line 736, in format_record
result += ''.join(_format_traceback_lines(frame_info.lines, Colors, self.has_colors, lvals))
File "C:\miniconda3_py37\envs\materials\lib\site-packages\stack_data\utils.py", line 145, in cached_property_wrapper
value = obj.dict[self.func.name] = self.func(obj)
File "C:\miniconda3_py37\envs\materials\lib\site-packages\stack_data\core.py", line 698, in lines
pieces = self.included_pieces
File "C:\miniconda3_py37\envs\materials\lib\site-packages\stack_data\utils.py", line 145, in cached_property_wrapper
value = obj.dict[self.func.name] = self.func(obj)
File "C:\miniconda3_py37\envs\materials\lib\site-packages\stack_data\core.py", line 649, in included_pieces
pos = scope_pieces.index(self.executing_piece)
File "C:\miniconda3_py37\envs\materials\lib\site-packages\stack_data\utils.py", line 145, in cached_property_wrapper
value = obj.dict[self.func.name] = self.func(obj)
File "C:\miniconda3_py37\envs\materials\lib\site-packages\stack_data\core.py", line 628, in executing_piece
return only(
File "C:\miniconda3_py37\envs\materials\lib\site-packages\executing\executing.py", line 164, in only
raise NotOneValueFound('Expected one value, found 0')
executing.executing.NotOneValueFound: Expected one value, found 0

Unable to import premise

Hi,

I am unable to import premise. Please find the screenshot attached. Can someone please help?

Abdur
(screenshot)

EncodingError

Hi, I started using premise in JupyterLab, and at the moment I am trying to create a NewDatabase by extracting a 3.6 ecoinvent database in ecospold format. However, the following error appears when importing the default inventories:

MaybeEncodingError: Error sending result: '<multiprocessing.pool.ExceptionWithTraceback object at 0x7fedc3aafbe0>'. Reason: 'TypeError("cannot pickle 'lxml.etree._ListErrorLog' object")'. Do you know how this problem could be solved? Thanks a lot.


MaybeEncodingError Traceback (most recent call last)
in
----> 1 ndb = NewDatabase(
2 scenarios = [
3 {"model":"remind", "pathway":"SSP2-NDC", "year":2030}
4 ],
5 source_type="ecospold",

~/.conda/envs/my3.9env/lib/python3.9/site-packages/premise/ecoinvent_modification.py in init(self, scenarios, source_version, source_type, key, source_db, source_file_path, additional_inventories, direct_import)
467 "\n////////////////////// EXTRACTING SOURCE DATABASE ///////////////////////"
468 )
--> 469 self.db = self.clean_database()
470 print(
471 "\n/////////////////// IMPORTING DEFAULT INVENTORIES ////////////////////"

~/.conda/envs/my3.9env/lib/python3.9/site-packages/premise/ecoinvent_modification.py in clean_database(self)
489 :return:
490 """
--> 491 return DatabaseCleaner(
492 self.source, self.source_type, self.source_file_path
493 ).prepare_datasets()

~/.conda/envs/my3.9env/lib/python3.9/site-packages/premise/clean_datasets.py in init(self, source_db, source_type, source_file_path)
36 if source_type == 'ecospold':
37 # The ecospold data needs to be formatted
---> 38 ei = bw2io.SingleOutputEcospold2Importer(source_file_path, source_db)
39 ei.apply_strategies()
40 self.db = ei.data

~/.conda/envs/my3.9env/lib/python3.9/site-packages/bw2io/importers/ecospold2.py in init(self, dirpath, db_name, extractor, use_mp, signal)
69 start = time()
70 try:
---> 71 self.data = extractor.extract(dirpath, db_name, use_mp=use_mp)
72 except RuntimeError as e:
73 raise MultiprocessingError(

~/.conda/envs/my3.9env/lib/python3.9/site-packages/bw2io/extractors/ecospold2.py in extract(cls, dirpath, db_name, use_mp)
85 for x in filelist
86 ]
---> 87 data = [p.get() for p in results]
88 else:
89 pbar = pyprind.ProgBar(

~/.conda/envs/my3.9env/lib/python3.9/site-packages/bw2io/extractors/ecospold2.py in (.0)
85 for x in filelist
86 ]
---> 87 data = [p.get() for p in results]
88 else:
89 pbar = pyprind.ProgBar(

~/.conda/envs/my3.9env/lib/python3.9/multiprocessing/pool.py in get(self, timeout)
769 return self._value
770 else:
--> 771 raise self._value
772
773 def _set(self, i, obj):

MaybeEncodingError: Error sending result: '<multiprocessing.pool.ExceptionWithTraceback object at 0x7fedc3aafbe0>'. Reason: 'TypeError("cannot pickle 'lxml.etree._ListErrorLog' object")'

geomatcher - add an option to choose the model resolution

Currently, premise loads the geomatcher from wurst, which already has predefined model regions for IMAGE and REMIND.
However, there are now two possible spatial resolutions for REMIND, and we could expect that there would be more possibilities in the future.
How should we deal with that?

  1. we could edit wurst and add the option there
  2. we could shift the addition of the regions to premise instead of doing it in wurst. This means that wurst/wurst/geo.py gets a bit shorter, and we only add the region definitions in premise/premise/geomap.py

I tried leaving wurst as it is, and adding the new region definitions to premise, but I got warnings about duplicate definitions.
How do you think we should proceed?
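Option 2 could look roughly like this: premise keeps its own per-model, per-resolution region definitions and registers only the requested set, which sidesteps the duplicate-definition warnings. `REGION_DEFINITIONS` and `make_geomap` are hypothetical names and toy data, not the actual premise/wurst API:

```python
# Hypothetical per-(model, resolution) region definitions.
REGION_DEFINITIONS = {
    ("remind", "standard"): {"LAM": ["BR", "MX"], "EUR": ["DE", "FR"]},
    ("remind", "high-res"): {"BRA": ["BR"], "DEU": ["DE"]},
    ("image", "standard"): {"WEU": ["DE", "FR"]},
}

def make_geomap(model, resolution="standard"):
    """Return the region -> countries mapping for the requested resolution."""
    try:
        return REGION_DEFINITIONS[(model, resolution)]
    except KeyError:
        available = sorted(r for m, r in REGION_DEFINITIONS if m == model)
        raise ValueError(
            f"Unknown resolution {resolution!r} for model {model!r}; "
            f"available: {available}"
        ) from None

print(make_geomap("remind", "high-res"))
# → {'BRA': ['BR'], 'DEU': ['DE']}
```

Because only one definition set is ever registered with the geomatcher, adding a third REMIND resolution later would just mean adding one more dictionary entry.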

superstructure write to BW: ValueError: This sheet is too large!

when I run ndb.write_superstructure_db_to_brightway() I get the following error
Freshly installed env with premise 1.08

Prepare database 1.
Prepare database 2.
Prepare database 3.
Looping through scenarios to detect changes...
Export a scenario difference file.
Dropped 852398 duplicates.
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [18], in <cell line: 1>()
----> 1 ndb.write_superstructure_db_to_brightway()

File ~\Anaconda3\envs\premise\lib\site-packages\premise\ecoinvent_modification.py:952, in NewDatabase.write_superstructure_db_to_brightway(self, name, filepath)
    949     print(f"Prepare database {scen + 1}.")
    950     scenario["database"] = self.prepare_db_for_export(scenario)
--> 952 self.database = build_superstructure_db(
    953     self.database, self.scenarios, db_name=name, fp=filepath
    954 )
    956 print("Done!")
    958 self.database = check_for_duplicates(self.database)

File ~\Anaconda3\envs\premise\lib\site-packages\premise\utils.py:440, in build_superstructure_db(origin_db, scenarios, db_name, fp)
    437 after = len(df)
    438 print(f"Dropped {before - after} duplicates.")
--> 440 df.to_excel(filepath, index=False)
    442 print(f"Scenario difference file exported to {filepath}!")
    444 list_modified_acts = list(
    445     set([e[0] for e, v in modified.items() if v["original"] == 0])
    446 )

File ~\Anaconda3\envs\premise\lib\site-packages\pandas\core\generic.py:2357, in NDFrame.to_excel(self, excel_writer, sheet_name, na_rep, float_format, columns, header, index, index_label, startrow, startcol, engine, merge_cells, encoding, inf_rep, verbose, freeze_panes, storage_options)
   2344 from pandas.io.formats.excel import ExcelFormatter
   2346 formatter = ExcelFormatter(
   2347     df,
   2348     na_rep=na_rep,
   (...)
   2355     inf_rep=inf_rep,
   2356 )
-> 2357 formatter.write(
   2358     excel_writer,
   2359     sheet_name=sheet_name,
   2360     startrow=startrow,
   2361     startcol=startcol,
   2362     freeze_panes=freeze_panes,
   2363     engine=engine,
   2364     storage_options=storage_options,
   2365 )

File ~\Anaconda3\envs\premise\lib\site-packages\pandas\io\formats\excel.py:875, in ExcelFormatter.write(self, writer, sheet_name, startrow, startcol, freeze_panes, engine, storage_options)
    873 num_rows, num_cols = self.df.shape
    874 if num_rows > self.max_rows or num_cols > self.max_cols:
--> 875     raise ValueError(
    876         f"This sheet is too large! Your sheet size is: {num_rows}, {num_cols} "
    877         f"Max sheet size is: {self.max_rows}, {self.max_cols}"
    878     )
    880 formatted_cells = self.get_formatted_cells()
    881 if isinstance(writer, ExcelWriter):

ValueError: This sheet is too large! Your sheet size is: 2959412, 16 Max sheet size is: 1048576, 16384
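The .xlsx format caps a worksheet at 1,048,576 rows, so a 2.9M-row scenario difference table can never fit in a single Excel sheet. One workaround (a suggestion, not current premise behavior) is to fall back to CSV when the DataFrame is too tall:

```python
import pandas as pd

EXCEL_MAX_ROWS = 1_048_576  # hard row limit of the .xlsx format

def choose_export_format(num_rows: int) -> str:
    """Pick a file format the data can actually fit in."""
    return "csv" if num_rows >= EXCEL_MAX_ROWS else "xlsx"

def export_difference_file(df: pd.DataFrame, stem: str) -> str:
    """Hypothetical replacement for the unconditional df.to_excel() call."""
    fmt = choose_export_format(len(df))
    path = f"{stem}.{fmt}"
    if fmt == "csv":
        df.to_csv(path, index=False)    # CSV has no row limit
    else:
        df.to_excel(path, index=False)  # needs an Excel writer engine
    return path
```

The Activity Browser side would of course also need to accept a CSV scenario difference file for this to be a complete fix; splitting the table across several sheets would be an alternative.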

requirements include testing packages

As with issue #8, the requirements in setup.py include coveralls and pytest.
I suggest removing the testing packages from the requirements to reduce the number of packages installed along with rmnd-lca.
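A common pattern is to move test-only dependencies into an optional extra. A sketch of the setup.py fragment (the runtime list is an illustrative subset, not the real one):

```python
# Sketch: runtime deps stay in install_requires, test-only packages
# move to an extra that users opt into with `pip install rmnd-lca[testing]`.
install_requires = ["numpy", "wurst", "pandas"]  # illustrative subset
extras_require = {"testing": ["pytest", "coveralls"]}
```

Regular `pip install rmnd-lca` then no longer pulls in pytest and coveralls, while CI can still install the full test environment via the extra.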

Superstructure scenario-diff-file: first row is empty

The first row in the scenario difference file for the superstructure is empty; the actual headers are in the second row, while they should be in the first row:

(screenshot)

The activity browser then throws an error as it cannot find the right headers in the first row of the excel file.

Current workaround: the user has to delete the first row manually in the excel file.
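The manual deletion can also be done programmatically until this is fixed. A sketch assuming the file loads into a pandas DataFrame where row 0 is blank (so pandas invents "Unnamed: N" headers) and row 1 holds the real headers; `drop_blank_header_row` is a hypothetical helper:

```python
import pandas as pd

def drop_blank_header_row(df: pd.DataFrame) -> pd.DataFrame:
    """Promote the second file row to header when the first row was blank."""
    fixed = df.copy()
    fixed.columns = fixed.iloc[0]               # real headers sit in row 0 of the data
    fixed = fixed.iloc[1:].reset_index(drop=True)
    fixed.columns.name = None
    return fixed

# Simulate what read_excel produces when the first sheet row is blank:
raw = pd.DataFrame(
    [["from activity", "to activity", "SSP2-Base"], ["a", "b", 1.0]],
    columns=["Unnamed: 0", "Unnamed: 1", "Unnamed: 2"],
)
print(drop_blank_header_row(raw).columns.tolist())
# → ['from activity', 'to activity', 'SSP2-Base']
```

When re-reading the file directly, `pd.read_excel(path, header=1)` achieves the same thing in one step, but the Activity Browser reads the file itself, so the file on disk still needs fixing.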

support for 3.7.1 (and semantic versioning tags of ecoinvent 3)

#16 hit back: ecoinvent version 3.7.1 is out and this cannot be coerced to a "float".

Current

NewDatabase validates the version of ecoinvent as a "float", but the latest version of ecoinvent is a semantic versioning tag: "3.7.1".

At this point, I think this is more of a "discussion" than a formal request for a change in the code: is premise supposed to support "generic" major versions of ecoinvent (3.7, 3.6, and so on)?
On the other hand, ecoinvent released "3.7.1" and, as usual, tells us that we should prefer 3.7.1 over 3.7!

What do you think, @romainsacchi?
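A lenient parser could accept both floats and semantic version strings, keeping only the major.minor part that matters for selecting migration data. This is a sketch; the function name is hypothetical:

```python
def parse_ecoinvent_version(version):
    """Accept 3.6, "3.6" or "3.7.1" and return a (major, minor) tuple."""
    parts = str(version).split(".")
    if len(parts) < 2 or not all(p.isdigit() for p in parts[:2]):
        raise ValueError(f"Cannot parse ecoinvent version: {version!r}")
    return int(parts[0]), int(parts[1])

print(parse_ecoinvent_version("3.7.1"))  # → (3, 7)
print(parse_ecoinvent_version(3.6))      # → (3, 6)
```

With this, 3.7 and "3.7.1" land on the same (3, 7) internally, which seems consistent with treating patch releases as equivalent for premise's purposes.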

Preserve national markets for electricity, and make them point to regional market groups

Currently, national market datasets are deleted to avoid confusion for the user.
However, there are some issues related to this, especially as scenarios are merged into a superstructure database.
One issue is that user models have to be modified to link to the prospective databases, since some markets no longer exist.

I guess this should apply to all sectors, not just electricity...

Documentation about remind output files directory

The documentation mentions:

(in the Requirements section)

REMIND IAM output files come with the library ("xxx.mif" and "GAINS emission factors.csv") and are located by default in the subdirectory "/data/Remind output files". A file path can be specified to fetch the REMIND IAM output files elsewhere on your computer.

(in the How to use it? section)

Note that, by default, the library will look for REMIND output files ("xxx.mif" files and "GAINS emission factors.csv") in the "data/Remind output files" subdirectory. If those are not located there, you need to specify the path to the correct directory, as such::

but the actual location inside the rmnd-lca package is:

data/remind_output_files/

This is a minor cosmetic issue, but I see that the ".gitignore" file

Pruning scenario output files

The current scenario output files are rather large, and I think we can prune quite a few rows automagically.

Consider for example the (top of the) below file:
(screenshot)
All values except the last row are the same for all scenario options. In that case we don't need to store them in the file.

Assuming the file I'm looking at here is representative of the average scenario file generated by premise, we could drop ~200k of the ~500k lines (in this exact case, 208,467 of the 510,172 lines). Assuming this scales linearly, dropping ~40% of the data could reduce file size by ~40%. A smaller file speeds up imports into the AB; I think many users would be quite happy with 40% faster loading times.

Assuming you're exporting through a pandas DataFrame (I haven't checked your code), writing something that checks for unchanged lines should not be too difficult.
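The pruning idea could look like this with pandas. Column names are illustrative; the assumption is that rows whose value is identical across every scenario column carry no information for the superstructure:

```python
import pandas as pd

def prune_unchanged(df: pd.DataFrame, scenario_cols) -> pd.DataFrame:
    """Drop rows whose value is identical across all scenario columns."""
    varies = df[scenario_cols].nunique(axis=1) > 1
    return df[varies].reset_index(drop=True)

# Toy scenario difference table: row "a" is identical everywhere.
df = pd.DataFrame({
    "from activity": ["a", "b", "c"],
    "SSP2-Base":   [1.0, 2.0, 3.0],
    "SSP2-NDC":    [1.0, 2.5, 3.0],
    "SSP2-PkBudg": [1.0, 2.7, 3.1],
})
print(prune_unchanged(df, ["SSP2-Base", "SSP2-NDC", "SSP2-PkBudg"]))
```

One caveat: consumers of the file (e.g. the Activity Browser) must then treat absent rows as "use the original value", which I believe is already the semantics of a scenario difference file.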

Error with SimaPro export

Hello,

I generated a version of ecoinvent SSP2-RCP19 and could load it through Brightway2 in the Activity Browser without any issue, but when I tried to export to SimaPro I got an error.

(screenshot)

In case it's useful, I integrated the IAM projections within my database through .update_all()

I'm using premise v1.1.4 & brightway2.2.4

I'll update to premise 1.1.7 to check if the bug still occurs and edit my post.

release 0.2.0 has missing `additional_inventories` directory

Hi,

I'm not sure if this is a true issue, but when using premise 0.2.0, I get an error when building a new database:

The code we use:

    newdb = NewDatabase(
        scenarios=[
            {
                "model": "remind",
                "pathway": combination["scenario"],
                "year": int(combination["year"]),
            }
        ],
        source_db=combination["source_db"],
        source_version=str(combination["source_version"]),
    )

yields:

    newdb = NewDatabase(                            
  File "/opt/conda/lib/python3.8/site-packages/premise/ecoinvent_modification.py", line 423, in __init__ 
    self.import_inventories()                       
  File "/opt/conda/lib/python3.8/site-packages/premise/ecoinvent_modification.py", line 457, in import_inventories
    carma = CarmaCCSInventory(self.db, self.version, file)                                              
  File "/opt/conda/lib/python3.8/site-packages/premise/inventory_imports.py", line 964, in __init__     
    super().__init__(database, version, path)                                                           
  File "/opt/conda/lib/python3.8/site-packages/premise/inventory_imports.py", line 667, in __init__     
    raise FileNotFoundError(                        
FileNotFoundError: The inventory file /opt/conda/lib/python3.8/site-packages/premise/data/additional_inventories/lci-Carma-CCS.xlsx could not be found.

When I take a look at the package on PyPI (both the wheel and the tar.gz), the `additional_inventories` directory is absent from the package, but maybe this is intended ;)

build and run requirements for conda package are identical

Currently, the requirements for building, running, and testing the conda package are the same.

requirements:
  build:
    - python
    - wurst
    - numpy
    - pandas
    - bw2io
    - bw2data
    - xarray <=0.13.0
    - pytest
    - pytest-cov
    - coveralls
    - setuptools
  run:
    - python
    - wurst
    - numpy
    - pandas
    - bw2io
    - bw2data
    - xarray <=0.13.0
    - pytest
    - pytest-cov
    - coveralls

I suggest you remove the "testing" packages from the run requirements; this would prevent conda from installing pytest, for example, when all you want is to run rmnd-lca, and trust that the tests are passing.
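Concretely, the recipe could move the test-only packages into a `test:` section, which conda-build uses only when testing the package (a sketch; version pins omitted):

```yaml
requirements:
  build:
    - python
    - setuptools
  run:
    - python
    - wurst
    - numpy
    - pandas
    - bw2io
    - bw2data
    - xarray <=0.13.0

test:
  requires:
    - pytest
    - pytest-cov
    - coveralls
```

With this split, `conda install` of the package pulls in only the runtime dependencies.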

Missing folders when installing via pip

In ...\premise\data, the three folders 'additional_inventories', 'iam_output_files' and 'GAINS_emission_factors' seem to be missing when installing via pip install premise.

Create technology-specific electricity networks

It would be useful to create electricity networks (from high to low voltage) for single technologies.
This is relevant for sensitivity tests (e.g., a BEV car running solely on hard coal).
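As a rough sketch of what such a market could look like, one could zero out all technosphere suppliers of a market dataset except the chosen technology and renormalize. The dataset below mimics a wurst-style dict and is purely illustrative (not a real ecoinvent record); the real high-to-low voltage chain with transformation losses would need the same treatment at each voltage level.

```python
def single_technology_market(market: dict, technology: str) -> dict:
    """Make `technology` the sole supplier of a market-style dataset."""
    total = sum(
        exc["amount"]
        for exc in market["exchanges"]
        if exc["type"] == "technosphere" and exc["name"] == technology
    )
    for exc in market["exchanges"]:
        if exc["type"] == "technosphere":
            # Chosen technology is scaled to supply 100%; all others are zeroed.
            exc["amount"] = exc["amount"] / total if exc["name"] == technology else 0.0
    return market

# Illustrative market dataset.
market = {
    "name": "market for electricity, low voltage",
    "exchanges": [
        {"name": "electricity production, hard coal", "type": "technosphere", "amount": 0.4},
        {"name": "electricity production, wind", "type": "technosphere", "amount": 0.6},
        {"name": "electricity, low voltage", "type": "production", "amount": 1.0},
    ],
}
market = single_technology_market(market, "electricity production, hard coal")
```

After the call, hard coal supplies the full kWh and wind supplies nothing, which is exactly the "BEV running solely on hard coal" sensitivity case.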

KeyError when creating NDC databases

Hello, I am running the latest version of premise. When I try to create SSP2-NDC databases, an error appears while extracting the IAM data (as shown below). I did not have any issues when creating databases for Base, PkBudg, etc. using ecoinvent 3.6. Could you please help me solve this? Thanks again!

//////////////////// EXTRACTING SOURCE DATABASE ////////////////////
Done!

////////////////// IMPORTING DEFAULT INVENTORIES ///////////////////
Done!

/////////////////////// EXTRACTING IAM DATA ////////////////////////

KeyError Traceback (most recent call last)
Input In [17], in
3 from premise import *
4 import brightway2 as bw
----> 6 ndb = NewDatabase(
7 scenarios=[
8 {"model":"remind", "pathway":"SSP2-NDC", "year":2050}
9 ],
10 source_db="ecoinvent_3.6cutoff", # <-- name of the database in the BW2 project. Must be a string.
11 source_version="3.6", # <-- version of ecoinvent. Can be "3.5", "3.6", "3.7" or "3.7.1". Must be a string.
12 key='' # <-- decryption key
13 # to be requested from the library maintainers if you want ot use default scenarios included in premise
14 )
15
16 ndb.update_all()

File ~\Miniconda3\envs\bw2\lib\site-packages\premise\ecoinvent_modification.py:549, in NewDatabase.__init__(self, scenarios, source_version, source_type, key, source_db, source_file_path, additional_inventories, system_model, time_horizon, use_cached_inventories, use_cached_database)
546 print("\n/////////////////////// EXTRACTING IAM DATA ////////////////////////")
548 for scenario in self.scenarios:
--> 549 scenario["external data"] = IAMDataCollection(
550 model=scenario["model"],
551 pathway=scenario["pathway"],
552 year=scenario["year"],
553 filepath_iam_files=scenario["filepath"],
554 key=key,
555 system_model=self.system_model,
556 time_horizon=self.time_horizon,
557 )
558 scenario["database"] = copy.deepcopy(self.database)
560 print("Done!")

File ~\Miniconda3\envs\bw2\lib\site-packages\premise\data_collection.py:173, in IAMDataCollection.__init__(self, model, pathway, year, filepath_iam_files, key, system_model, time_horizon)
170 self.gnr_data = get_gnr_data()
172 self.electricity_markets = self.__get_iam_electricity_markets(data=data)
--> 173 self.fuel_markets = self.__get_iam_fuel_markets(data=data)
175 prod_vars = self.__get_iam_variable_labels(IAM_ELEC_VARS, key="iam_aliases")
176 prod_vars.update(
177 self.__get_iam_variable_labels(IAM_FUELS_VARS, key="iam_aliases")
178 )

File ~\Miniconda3\envs\bw2\lib\site-packages\premise\data_collection.py:948, in IAMDataCollection.__get_iam_fuel_markets(self, data)
939 raise KeyError(
940 f"{self.year} is outside of the boundaries "
941 f"of the IAM file: {data.year.values.min()}-{data.year.values.max()}"
942 )
944 # Finally, if the specified year falls in between two periods provided by the IAM
945 # sometimes, the World region is either neglected
946 # or wrongly evaluated so we fix that here
--> 948 data.loc[dict(region="World", variables=list_technologies)] = data.loc[
949 dict(
950 region=[r for r in data.coords["region"].values if r != "World"],
951 variables=list_technologies,
952 )
953 ].sum(dim="region")
955 # Interpolation between two periods
956 data_to_return = data.loc[:, list_technologies, :]

File ~\Miniconda3\envs\bw2\lib\site-packages\xarray\core\dataarray.py:198, in _LocIndexer.__getitem__(self, key)
196 labels = indexing.expanded_indexer(key, self.data_array.ndim)
197 key = dict(zip(self.data_array.dims, labels))
--> 198 return self.data_array.sel(key)

File ~\Miniconda3\envs\bw2\lib\site-packages\xarray\core\dataarray.py:1328, in DataArray.sel(self, indexers, method, tolerance, drop, **indexers_kwargs)
1219 def sel(
1220 self,
1221 indexers: Mapping[Any, Any] = None,
(...)
1225 **indexers_kwargs: Any,
1226 ) -> DataArray:
1227 """Return a new DataArray whose data is given by selecting index
1228 labels along the specified dimension(s).
1229
(...)
1326 Dimensions without coordinates: points
1327 """
-> 1328 ds = self._to_temp_dataset().sel(
1329 indexers=indexers,
1330 drop=drop,
1331 method=method,
1332 tolerance=tolerance,
1333 **indexers_kwargs,
1334 )
1335 return self._from_temp_dataset(ds)

File ~\Miniconda3\envs\bw2\lib\site-packages\xarray\core\dataset.py:2500, in Dataset.sel(self, indexers, method, tolerance, drop, **indexers_kwargs)
2439 """Returns a new dataset with each array indexed by tick labels
2440 along the specified dimension(s).
2441
(...)
2497 DataArray.sel
2498 """
2499 indexers = either_dict_or_kwargs(indexers, indexers_kwargs, "sel")
-> 2500 pos_indexers, new_indexes = remap_label_indexers(
2501 self, indexers=indexers, method=method, tolerance=tolerance
2502 )
2503 # TODO: benbovy - flexible indexes: also use variables returned by Index.query
2504 # (temporary dirty fix).
2505 new_indexes = {k: v[0] for k, v in new_indexes.items()}

File ~\Miniconda3\envs\bw2\lib\site-packages\xarray\core\coordinates.py:421, in remap_label_indexers(obj, indexers, method, tolerance, **indexers_kwargs)
414 indexers = either_dict_or_kwargs(indexers, indexers_kwargs, "remap_label_indexers")
416 v_indexers = {
417 k: v.variable.data if isinstance(v, DataArray) else v
418 for k, v in indexers.items()
419 }
--> 421 pos_indexers, new_indexes = indexing.remap_label_indexers(
422 obj, v_indexers, method=method, tolerance=tolerance
423 )
424 # attach indexer's coordinate to pos_indexers
425 for k, v in indexers.items():

File ~\Miniconda3\envs\bw2\lib\site-packages\xarray\core\indexing.py:121, in remap_label_indexers(data_obj, indexers, method, tolerance)
119 for dim, index in indexes.items():
120 labels = grouped_indexers[dim]
--> 121 idxr, new_idx = index.query(labels, method=method, tolerance=tolerance)
122 pos_indexers[dim] = idxr
123 if new_idx is not None:

File ~\Miniconda3\envs\bw2\lib\site-packages\xarray\core\indexes.py:247, in PandasIndex.query(self, labels, method, tolerance)
245 indexer = get_indexer_nd(self.index, label, method, tolerance)
246 if np.any(indexer < 0):
--> 247 raise KeyError(f"not all values found in index {coord_name!r}")
249 return indexer, None

KeyError: "not all values found in index 'variables'"

Versions 0.4 (and potentially 1.0) are not compatible with Python 3.8

Current

In a Python 3.8 environment, pip will only install premise>=0.4,<1.0.0, but the tests won't pass.

Expected

If premise is intended to be used with python>=3.9 only, then it should declare it so that it cannot be installed in python<3.9.

> pytest -k ecoinvent                                                                         
============================================================================================== test session starts ===============================================================================================
platform linux -- Python 3.8.12, pytest-7.0.1, pluggy-1.0.0
rootdir: /home/polka-dot/premise, configfile: pytest.ini, testpaths: tests
plugins: cov-3.0.0
collected 5 items / 4 errors / 2 deselected                                                                                                                                                                      

===================================================================================================== ERRORS =====================================================================================================
__________________________________________________________________________________ ERROR collecting tests/test_activity_maps.py __________________________________________________________________________________
tests/test_activity_maps.py:2: in <module>
from premise.activity_maps import InventorySet
premise/__init__.py:9: in <module>
from .ecoinvent_modification import NewDatabase
premise/ecoinvent_modification.py:18: in <module>
from .inventory_imports import (
premise/inventory_imports.py:7: in <module>
import carculator
../../../miniconda3/envs/premise-dev/lib/python3.8/site-packages/carculator/__init__.py:42: in <module>
from .model import CarModel
../../../miniconda3/envs/premise-dev/lib/python3.8/site-packages/carculator/model.py:9: in <module>
from .energy_consumption import EnergyConsumptionModel
../../../miniconda3/envs/premise-dev/lib/python3.8/site-packages/carculator/energy_consumption.py:24: in <module>
class EnergyConsumptionModel:
../../../miniconda3/envs/premise-dev/lib/python3.8/site-packages/carculator/energy_consumption.py:135: in EnergyConsumptionModel
) -> tuple[Union[float, Any], Any, Union[float, Any]]:
E   TypeError: 'type' object is not subscriptable
______________________________________________________________________________________ ERROR collecting tests/test_cars.py _______________________________________________________________________________________
tests/test_cars.py:8: in <module>
from premise import DATA_DIR, NewDatabase
premise/__init__.py:9: in <module>
from .ecoinvent_modification import NewDatabase
premise/ecoinvent_modification.py:18: in <module>
from .inventory_imports import (
premise/inventory_imports.py:7: in <module>
import carculator
../../../miniconda3/envs/premise-dev/lib/python3.8/site-packages/carculator/__init__.py:42: in <module>
from .model import CarModel
../../../miniconda3/envs/premise-dev/lib/python3.8/site-packages/carculator/model.py:9: in <module>
from .energy_consumption import EnergyConsumptionModel
../../../miniconda3/envs/premise-dev/lib/python3.8/site-packages/carculator/energy_consumption.py:24: in <module>
class EnergyConsumptionModel:
../../../miniconda3/envs/premise-dev/lib/python3.8/site-packages/carculator/energy_consumption.py:135: in EnergyConsumptionModel
) -> tuple[Union[float, Any], Any, Union[float, Any]]:
E   TypeError: 'type' object is not subscriptable
___________________________________________________________________________________ ERROR collecting tests/test_electricity.py ___________________________________________________________________________________
tests/test_electricity.py:4: in <module>
from premise import DATA_DIR
premise/__init__.py:9: in <module>
from .ecoinvent_modification import NewDatabase
premise/ecoinvent_modification.py:18: in <module>
from .inventory_imports import (
premise/inventory_imports.py:7: in <module>
import carculator
../../../miniconda3/envs/premise-dev/lib/python3.8/site-packages/carculator/__init__.py:42: in <module>
from .model import CarModel
../../../miniconda3/envs/premise-dev/lib/python3.8/site-packages/carculator/model.py:9: in <module>
from .energy_consumption import EnergyConsumptionModel
../../../miniconda3/envs/premise-dev/lib/python3.8/site-packages/carculator/energy_consumption.py:24: in <module>
class EnergyConsumptionModel:
../../../miniconda3/envs/premise-dev/lib/python3.8/site-packages/carculator/energy_consumption.py:135: in EnergyConsumptionModel
) -> tuple[Union[float, Any], Any, Union[float, Any]]:
E   TypeError: 'type' object is not subscriptable
_______________________________________________________________________________ ERROR collecting tests/test_import_inventories.py ________________________________________________________________________________
tests/test_import_inventories.py:6: in <module>
from premise import DATA_DIR, INVENTORY_DIR
premise/__init__.py:9: in <module>
from .ecoinvent_modification import NewDatabase
premise/ecoinvent_modification.py:18: in <module>
from .inventory_imports import (
premise/inventory_imports.py:7: in <module>
import carculator
../../../miniconda3/envs/premise-dev/lib/python3.8/site-packages/carculator/__init__.py:42: in <module>
from .model import CarModel
../../../miniconda3/envs/premise-dev/lib/python3.8/site-packages/carculator/model.py:9: in <module>
from .energy_consumption import EnergyConsumptionModel
../../../miniconda3/envs/premise-dev/lib/python3.8/site-packages/carculator/energy_consumption.py:24: in <module>
class EnergyConsumptionModel:
../../../miniconda3/envs/premise-dev/lib/python3.8/site-packages/carculator/energy_consumption.py:135: in EnergyConsumptionModel
) -> tuple[Union[float, Any], Any, Union[float, Any]]:
E   TypeError: 'type' object is not subscriptable
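The failing line in carculator subscripts the built-in `tuple` in an annotation, which only works at runtime on Python >= 3.9 (PEP 585). A library that wants to keep 3.8 support can enable postponed evaluation of annotations instead; a minimal illustration:

```python
# PEP 563: with this future import, annotations are stored as strings and
# never evaluated at definition time, so tuple[...] no longer raises a
# TypeError on Python 3.8.
from __future__ import annotations

from typing import Any, Union

def split_value(x: float) -> tuple[Union[float, Any], Any]:
    return x, None

result = split_value(1.5)
```

Alternatively, premise could declare `python_requires=">=3.9"` in its setup metadata so that pip refuses to install it on older interpreters in the first place.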

This is the python 3.9 environment I used to test:

Package versions:
  • appdirs 1.4.4
  • asteval 0.9.26
  • astunparse 1.6.3
  • attrs 21.4.0
  • brightway2 2.4.2
  • bw-migrations 0.2
  • bw2analyzer 0.10
  • bw2calc 1.8.1
  • bw2data 3.6.4
  • bw2io 0.8.6
  • bw2parameters 0.7
  • carculator 1.6.5
  • carculator-truck 0.2.9
  • certifi 2021.10.8
  • cffi 1.15.0
  • charset-normalizer 2.0.12
  • click 8.0.4
  • constructive-geometries 0.7
  • country-converter 0.7.4
  • coverage 6.3.2
  • cryptography 36.0.1
  • cycler 0.11.0
  • docopt 0.6.2
  • eight 1.0.1
  • et-xmlfile 1.1.0
  • fasteners 0.17.3
  • Flask 2.0.3
  • fonttools 4.29.1
  • future 0.18.2
  • idna 3.3
  • iniconfig 1.1.1
  • intel-openmp 2022.0.2
  • itsdangerous 2.1.0
  • Jinja2 3.0.3
  • kiwisolver 1.3.2
  • klausen 0.1.1
  • lxml 4.8.0
  • MarkupSafe 2.1.0
  • matplotlib 3.5.1
  • mkl 2022.0.2
  • mrio-common-metadata 0.2
  • numexpr 2.8.1
  • numpy 1.22.2
  • openpyxl 3.0.9
  • packaging 21.3
  • pandas 1.4.1
  • pathlib 1.0.1
  • peewee 3.14.9
  • Pillow 9.0.1
  • pip 21.2.4
  • pluggy 1.0.0
  • premise 1.0.3
  • premise-gwp 0.5
  • prettytable 3.2.0
  • psutil 5.9.0
  • py 1.11.0
  • pycountry 22.3.5
  • pycparser 2.21
  • pypardiso 0.4.0
  • pyparsing 3.0.7
  • PyPrind 2.11.3
  • pytest 7.0.1
  • pytest-cov 3.0.0
  • python-dateutil 2.8.2
  • python-json-logger 2.0.2
  • pytz 2021.3
  • pyxlsb 1.0.9
  • PyYAML 6.0
  • requests 2.27.1
  • scipy 1.7.0
  • setuptools 58.0.4
  • six 1.16.0
  • stats-arrays 0.6.5
  • tabulate 0.8.9
  • tbb 2021.5.1
  • tomli 2.0.1
  • toolz 0.11.2
  • tqdm 4.63.0
  • unicodecsv 0.14.1
  • Unidecode 1.3.3
  • urllib3 1.26.8
  • voluptuous 0.12.2
  • wcwidth 0.2.5
  • Werkzeug 2.0.3
  • wheel 0.37.1
  • Whoosh 2.7.4
  • wrapt 1.13.3
  • wurst 0.3
  • xarray 0.17.0
  • xlrd 2.0.1
  • XlsxWriter 3.0.3

fix: export to superstructure

Got an error when exporting ecoinvent 3.8 cutoff to a superstructure database: rows exceed allowed max.

fix:

In Anaconda3\envs\PREMISE\Lib\site-packages\pandas\io\formats\excel.py, at line 472, increase the value for the maximum number of rows.
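Note that 1,048,576 rows is a hard limit of the .xlsx format itself, so raising the pandas check may produce a file that Excel cannot open. A sketch of a safer fallback, switching to CSV (which has no such cap) when the table is too tall; the function and names are hypothetical:

```python
EXCEL_MAX_ROWS = 1_048_576  # hard cap of the .xlsx format (2**20 rows)

def choose_export_format(n_rows: int) -> str:
    """Pick .xlsx when the sheet fits; otherwise fall back to .csv."""
    # Strict "<" leaves one row free for the header.
    return "xlsx" if n_rows < EXCEL_MAX_ROWS else "csv"

choose_export_format(510_172)    # "xlsx"
choose_export_format(2_000_000)  # "csv"
```

The export routine would then call `df.to_excel(...)` or `df.to_csv(...)` depending on the chosen format.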

update_electricity() does not work for REMIND scenarios

update_electricity() does not work for REMIND scenarios (I imported only one REMIND scenario). It produces the following error:

+------------------------------------------------------------------+
| Warning |
+------------------------------------------------------------------+
| Because some of the scenarios can yield LCI databases |
| containing net negative emission technologies (NET), |
| it is advised to account for biogenic CO2 flows when calculating |
| Global Warming potential indicators. |
| premise_gwp provides characterization factors for such flows. |
| |
| Install it via |
| pip install premise_gwp |
| or |
| conda install -c romainsacchi premise_gwp |
| |
| Within your bw2 project: |
| from premise_gwp import add_premise_gwp |
| add_premise_gwp() |
+------------------------------------------------------------------+

////////////////////// EXTRACTING SOURCE DATABASE //////////////////
Done!

/////////////////// IMPORTING DEFAULT INVENTORIES //////////////////
Done!

////////////////////// EXTRACTING IAM DATA /////////////////////////
Done!

/////////////////// ELECTRICITY ////////////////////
More than one region found for GR:['EUR', 'ESC']
More than one region found for RO:['EUR', 'ECS']
More than one region found for CH:['NEU', 'NEN']
More than one region found for BE:['EUR', 'EWN']
More than one region found for DE:['EUR', 'DEU']
More than one region found for HU:['EUR', 'ECS']
More than one region found for SE:['EUR', 'ENC']
More than one region found for CY:['EUR', 'ESC']
More than one region found for AT:['EUR', 'EWN']
More than one region found for DK:['EUR', 'ENC']
More than one region found for PT:['EUR', 'ESW']
More than one region found for LT:['EUR', 'ECE']
More than one region found for FR:['EUR', 'FRA']
More than one region found for IT:['EUR', 'ESC']
More than one region found for NL:['EUR', 'EWN']
More than one region found for HR:['EUR', 'ECS']
More than one region found for ES:['EUR', 'ESW']
More than one region found for SK:['EUR', 'ECE']
More than one region found for MK:['NEU', 'NES']
More than one region found for CZ:['EUR', 'ECE']
More than one region found for NO:['NEU', 'NEN']
More than one region found for PL:['EUR', 'ECE']
More than one region found for RS:['NEU', 'NES']
More than one region found for GB:['EUR', 'UKI']
More than one region found for IS:['NEU', 'NEN']
More than one region found for BA:['NEU', 'NES']
More than one region found for BG:['EUR', 'ECS']
More than one region found for IE:['EUR', 'UKI']
More than one region found for MT:['EUR', 'ESC']
More than one region found for XK:['EUR', 'NES']
More than one region found for FI:['EUR', 'ENC']
More than one region found for LU:['EUR', 'EWN']
More than one region found for EE:['EUR', 'ECE']
More than one region found for SI:['EUR', 'ECS']
More than one region found for TR:['MEA', 'NES']
More than one region found for AL:['NEU', 'NES']
More than one region found for LV:['EUR', 'ECE']
More than one region found for ME:['NEU', 'NES']
More than one region found for GI:['EUR', 'UKI']
Adjust efficiency of power plants...
Log of changes in power plants efficiencies saved in
Rescale inventories and emissions for Biomass CHP


KeyError Traceback (most recent call last)
~\.conda\envs\premiseenv\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
3360 try:
-> 3361 return self._engine.get_loc(casted_key)
3362 except KeyError as err:

~\.conda\envs\premiseenv\lib\site-packages\pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()

~\.conda\envs\premiseenv\lib\site-packages\pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()

pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()

pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()

KeyError: None

The above exception was the direct cause of the following exception:

KeyError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_17096/2751985968.py in
16 )
17
---> 18 ndb.update_electricity()
19
20 #ndb.write_db_to_brightway(name="test_elec3")

c:\users\harp_ca\pycharmprojects\premise\premise\premise\ecoinvent_modification.py in update_electricity(self)
848 )
849 # scenario["database"] = electricity.update_electricity_markets()
--> 850 self.database = electricity.update_electricity_efficiency()
851
852 def update_fuels(self):

c:\users\harp_ca\pycharmprojects\premise\premise\premise\electricity.py in update_electricity_efficiency(self)
1091
1092 # Find relative efficiency change indicated by the IAM
-> 1093 scaling_factor = 1 / dict_technology["IAM_eff_func"](
1094 variable=technology,
1095 location=loc,

c:\users\harp_ca\pycharmprojects\premise\premise\premise\transformation.py in find_iam_efficiency_change(self, variable, location, year, scenario)
466 """
467
--> 468 scaling_factor = self.iam_data.efficiency.sel(
469 region=location, variables=variable, year=int(year), scenario=scenario
470 ).values.item(0)

~\.conda\envs\premiseenv\lib\site-packages\xarray\core\dataarray.py in sel(self, indexers, method, tolerance, drop, **indexers_kwargs)
1252 Dimensions without coordinates: points
1253 """
-> 1254 ds = self._to_temp_dataset().sel(
1255 indexers=indexers,
1256 drop=drop,

~\.conda\envs\premiseenv\lib\site-packages\xarray\core\dataset.py in sel(self, indexers, method, tolerance, drop, **indexers_kwargs)
2228 """
2229 indexers = either_dict_or_kwargs(indexers, indexers_kwargs, "sel")
-> 2230 pos_indexers, new_indexes = remap_label_indexers(
2231 self, indexers=indexers, method=method, tolerance=tolerance
2232 )

~\.conda\envs\premiseenv\lib\site-packages\xarray\core\coordinates.py in remap_label_indexers(obj, indexers, method, tolerance, **indexers_kwargs)
414 }
415
--> 416 pos_indexers, new_indexes = indexing.remap_label_indexers(
417 obj, v_indexers, method=method, tolerance=tolerance
418 )

~\.conda\envs\premiseenv\lib\site-packages\xarray\core\indexing.py in remap_label_indexers(data_obj, indexers, method, tolerance)
268 coords_dtype = data_obj.coords[dim].dtype
269 label = maybe_cast_to_coords_dtype(label, coords_dtype)
--> 270 idxr, new_idx = convert_label_indexer(index, label, dim, method, tolerance)
271 pos_indexers[dim] = idxr
272 if new_idx is not None:

~\.conda\envs\premiseenv\lib\site-packages\xarray\core\indexing.py in convert_label_indexer(index, label, index_name, method, tolerance)
189 indexer = index.get_loc(label_value)
190 else:
--> 191 indexer = index.get_loc(label_value, method=method, tolerance=tolerance)
192 elif label.dtype.kind == "b":
193 indexer = label

~\.conda\envs\premiseenv\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
3361 return self._engine.get_loc(casted_key)
3362 except KeyError as err:
-> 3363 raise KeyError(key) from err
3364
3365 if is_scalar(key) and isna(key) and not self.hasnans:

KeyError: None

module 'bw2calc' has no attribute 'ComparativeMonteCarlo'

Hi,
After installing the latest version of premise (1.0.0), I am running into problems with installed packages and getting errors.
Right now I am not able to create any new database because of this error:
....................................................................................................................................................................

AttributeError                            Traceback (most recent call last)
Input In [1], in <module>
      1 from premise import *
----> 2 import brightway2 as bw

File ~\.conda\envs\premise\lib\site-packages\brightway2\__init__.py:3, in <module>
      1 # -*- coding: utf-8 -*
      2 from bw2data import *
----> 3 from bw2calc import *
      4 from bw2io import *
      6 __version__ = (2, 4, 1)

AttributeError: module 'bw2calc' has no attribute 'ComparativeMonteCarlo'

.........................................................................................................................................................................
I have bw2calc 1.8.0 and brightway2 2.4.1 installed in my environment.

@cmutel @romainsacchi

a fork is the official publisher but this makes things "unclear"

I have two comments on the fact that the fork from @romainsacchi is the official publisher of the package: installation and issue creation.

1. Installation nightmare

The package is officially published from a fork (the one of @romainsacchi )
The documentation clearly says that to install the package, one has 2 options:

In my particular case, I install brightway2 from (a virtual environment created with) conda, and I would rather have a "conda-coherent" environment: as many packages as possible installed with conda, and pip only as a last resort.

Right now [2020-11-16], PyPI seems to have (semantic) version 0.1.6 of the rmnd-lca package online, while the conda package is being built nightly and uses calendar labels (the latest being 2020.11.13).

I'm more in favor of semantic versioning, but I think that at least the documentation should say a bit more about all this.
For example, if the developers intend to keep both packages, explain the intended use of the conda package (perhaps bleeding-edge development builds) vs. the intended use of the PyPI package (stable, proven releases).

2. Issue creation

Issues are filed against the original repository, because it is impossible to file them against the fork (I don't know if this can be changed).

Add GWP of hydrogen to impact category

Add a GWP for hydrogen emitted to air to the climate change impact category, to account for H2 leakage in a hydrogen economy.

GWP: 5.8 kg CO2-eq / kg H2
according to Derwent et al. 2006

Derwent, R.G., Simmonds, P.G., O'Doherty, S.J., Manning, A.J., Collins, W.J., & Stevenson, D.S. (2006). Global environmental impacts of the hydrogen economy. International Journal of Nuclear Hydrogen Production and Applications, 1, 57-67.

IPCC also uses 5.8 https://archive.ipcc.ch/publications_and_data/ar4/wg1/en/ch2s2-10-3-6.html
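In Brightway2 terms, this would amount to appending a characterization factor of 5.8 for the "Hydrogen" flow to air to the CF list of a GWP method. A pure-data sketch of the shape of that list (the flow keys below are illustrative placeholders, not real biosphere3 UUIDs):

```python
H2_GWP = 5.8  # kg CO2-eq per kg H2 to air (Derwent et al. 2006)

# A bw2 method stores CFs as [(flow_key, factor), ...]; keys here are made up.
cfs = [
    (("biosphere3", "uuid-co2-fossil-air"), 1.0),
    (("biosphere3", "uuid-ch4-fossil-air"), 29.8),
]
cfs.append((("biosphere3", "uuid-hydrogen-air"), H2_GWP))
# With brightway2 one would then persist the list via bw.Method(method_key).write(cfs)
```

The same pattern is what premise_gwp uses to register biogenic CO2 factors, so the hydrogen factor could plausibly live alongside those.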
