nempy's Issues

No slack attribute when unit set to outage with 0 max capacity and 0 volume bids

Hi there,

First of all, thanks for all your work on this package and excellent documentation!

I've encountered a small issue with some inputs I was trying. After some digging, I found the problem was partly due to some iffy inputs of mine (stemming from an upstream model), but the error message I was getting wasn't particularly illuminating, so maybe something can be done to save someone else some debugging time.

Specifically, the inputs I used have a unit on outage. This is set by having the volume bids all set to 0, and the maximum capacity set to 0. With this, I get an exception thrown when nempy tries to read the slack of the constraint. I'm guessing the solver has dropped the constraint altogether as it is solved identically.

Below is a minimal example, tried with nempy 1.1.5 and python-mip 1.15.0.

import pandas as pd
from nempy import markets

# Unit 'C' is on outage, energy bids all 0

volume_bids = pd.DataFrame({
    'unit': ['A', 'B', 'C'],
    '1': [20.0, 50.0, 0.0],
    '2': [20.0, 30.0, 0.0],
    '3': [5.0, 10.0, 0.0]
})

price_bids = pd.DataFrame({
    'unit': ['A', 'B', 'C'],
    '1': [50.0, 50.0, 0.0],
    '2': [60.0, 55.0, 0.0], 
    '3': [100.0, 80.0, 0.0]
})

unit_info = pd.DataFrame({
    'unit': ['A', 'B', 'C'],
    'region': ['NSW', 'NSW', 'NSW'],
})

# Max capacity also set to 0 for unit 'C'
max_capacity = pd.DataFrame({
    'unit': ['A', 'B', 'C'],
    'capacity': [50.0, 100.0, 0.0],
})

demand = pd.DataFrame({
    'region': ['NSW'],
    'demand': [120.0]
})

market = markets.SpotMarket(unit_info=unit_info, 
                            market_regions=['NSW'])
market.set_unit_volume_bids(volume_bids)
market.set_unit_price_bids(price_bids)
market.set_demand_constraints(demand)
market.set_unit_bid_capacity_constraints(max_capacity)

market.dispatch() 

The dispatch call raises `AttributeError: 'NoneType' object has no attribute 'slack'` in `InterfaceToSolver.get_slack_in_constraints`.

As a fix, at a minimum a more useful error message could be raised here; a more comprehensive change would be to supply a default slack for eliminated constraints, e.g. change

slack = constraints_type_and_rhs['constraint_id'].apply(lambda x: self.mip_model.constr_by_name(str(x)).slack,
                                                                self.mip_model)

to

slack = constraints_type_and_rhs['constraint_id'].apply(
    lambda x: getattr(self.mip_model.constr_by_name(str(x)), "slack", 0.0),
    self.mip_model
)

(Unrelated, but I think that second argument to .apply is unintended: for a pandas Series it lands in the convert_dtype parameter.)

This might have broader consequences though and maybe 0.0 isn't the correct default.
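To illustrate why the getattr-based fix works: if `constr_by_name` returns None for a constraint the solver's presolve has eliminated, direct attribute access raises, while `getattr` with a default does not. The sketch below uses hypothetical `FakeModel`/`FakeConstr` stand-ins for the python-mip objects, not nempy's actual classes.

```python
# Hypothetical stand-ins for the python-mip model and constraint objects.
class FakeConstr:
    def __init__(self, slack):
        self.slack = slack

class FakeModel:
    def __init__(self, constrs):
        self._constrs = constrs  # name -> constraint, or None if eliminated

    def constr_by_name(self, name):
        return self._constrs.get(name)

model = FakeModel({'1': FakeConstr(5.0), '2': None})

# Direct access on the eliminated constraint raises:
#   model.constr_by_name('2').slack  -> AttributeError
# getattr with a default falls back instead:
slacks = {name: getattr(model.constr_by_name(name), 'slack', 0.0)
          for name in ['1', '2']}
print(slacks)  # {'1': 5.0, '2': 0.0}
```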

Cheers.

XMLCacheManager.populate_by_day

Hey @nick-gorman, very minor here, but:

populate_by_day in XMLCacheManager seems to break if you pass end_month=12

def populate_by_day(self, start_year, start_month, end_year, end_month, start_day, end_day, verbose=True):
    """Download data to the cache from the AEMO website. Data downloaded is inclusive of the start and end date."""
    start = datetime(year=start_year, month=start_month, day=start_day) - timedelta(days=1)
    if end_month == 12:
        end_month = 0
        end_year += 1
    end = datetime(year=end_year, month=end_month, day=end_day)

I think what's intended is...

if (end_month == 12) & (end_day == 31):
    end_day = 1
    end_month = 1
    end_year += 1
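An alternative that avoids special-casing December entirely is to build the inclusive end date and let timedelta handle month and year rollover. This is only a sketch, assuming the rest of populate_by_day just needs `end` to be the day after the last requested day:

```python
from datetime import datetime, timedelta

# Construct the exclusive end date directly; timedelta handles the
# Dec 31 -> Jan 1 rollover without any month arithmetic.
def exclusive_end(end_year, end_month, end_day):
    return datetime(year=end_year, month=end_month, day=end_day) + timedelta(days=1)

print(exclusive_end(2021, 12, 31))  # 2022-01-01 00:00:00
```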

running on Windows

Large market simulations, such as recreations of historical dispatch, are failing on Windows. This is accompanied by the message "Exit code -1073741819". The issue has been traced to the open source solver CBC, see coin-or/Cbc#325. While we wait for a proper solution, possible workarounds for Windows users include:

  • running a Linux virtual machine
  • installing the Gurobi solver (academic licenses are available)
  • trying the workaround discussed here: coin-or/Cbc#325

Looks like plotly is missing in your requirements?

Thanks! Now it works. So you need to add pytest 6.2.5 to your requirements?

Next problem:

(nempyenv) C:\work\nempy\nempy\examples>python all_features_example.py
Traceback (most recent call last):
  File "C:\work\nempy\nempy\examples\all_features_example.py", line 5, in <module>
    import plotly.graph_objects as go
ModuleNotFoundError: No module named 'plotly'

Looks like plotly is missing in your requirements?

Originally posted by @noah80 in #4 (comment)

`fcas_semi_scheduled` may be empty

Hi,

Great library, thanks for writing it and making it accessible to everyone.

I think there may be an issue when running earlier years that don't feature semi-scheduled generators offering FCAS.

To reproduce what I think is the error, run the example code in the README and substitute references to 2019 with 2015. I get an error in units._scaling_for_uigf, which seems to arise because the fcas_semi_scheduled data frame is empty.

My workaround was to add a check for whether the data frame was empty:

    if not fcas_semi_scheduled.empty:
        # Scale high break points.
        fcas_semi_scheduled['HIGHBREAKPOINT'] = \
            fcas_semi_scheduled.apply(lambda x: get_new_high_break_point(x['UIGF'], x['HIGHBREAKPOINT'],
                                                                         x['ENABLEMENTMAX']),
                                      axis=1)

        # Adjust ENABLEMENTMAX.
        fcas_semi_scheduled['ENABLEMENTMAX'] = \
            np.where(fcas_semi_scheduled['ENABLEMENTMAX'] > fcas_semi_scheduled['UIGF'],
                     fcas_semi_scheduled['UIGF'], fcas_semi_scheduled['ENABLEMENTMAX'])

        fcas_semi_scheduled = fcas_semi_scheduled.drop(['UIGF'], axis=1)  # assign, or the drop is a no-op

        # Combined bids back together.
        BIDPEROFFER_D = pd.concat([energy_bids, fcas_not_semi_scheduled, fcas_semi_scheduled])
    else:
        # Combined bids back together.
        BIDPEROFFER_D = pd.concat([energy_bids, fcas_not_semi_scheduled])

The tests I have done so far produce results very close to what NEMDE produces, so everything seems to be working.

Happy to look at it more and submit a PR if helpful.
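The guard pattern above generalises: row-wise `DataFrame.apply(..., axis=1)` on an empty frame can yield an empty DataFrame rather than a Series of results, breaking code that expects a new column, so checking `.empty` first sidesteps that. A toy illustration of the same pattern, with made-up data (the column names mirror the AEMO fields, but this is not nempy's actual function):

```python
import pandas as pd

# Cap ENABLEMENTMAX at UIGF, but only when there are rows to process.
def cap_enablement_max(fcas: pd.DataFrame) -> pd.DataFrame:
    if not fcas.empty:
        fcas = fcas.copy()
        fcas['ENABLEMENTMAX'] = fcas.apply(
            lambda x: min(x['ENABLEMENTMAX'], x['UIGF']), axis=1)
    return fcas

bids = pd.DataFrame({'UIGF': [40.0, 10.0], 'ENABLEMENTMAX': [30.0, 20.0]})
print(cap_enablement_max(bids)['ENABLEMENTMAX'].tolist())  # [30.0, 10.0]
print(cap_enablement_max(bids.iloc[0:0]).empty)            # True
```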

Add Very Fast Contingency FCAS constraints

A market implementation change added RAISE1SEC (R1) and LOWER1SEC (L1) to the NEMDE FCAS model. This went live on Monday 9 October 2023 at 1:00 pm (Market Time).

From AEMO's update to the FCAS model, these services will constrain FCAS availability in the same manner as the existing raise and lower contingency services.

Historical Inputs `BIDPEROFFER_D` post February 2021

Firstly, I would like to sincerely compliment all of the work completed on NEMpy thus far; the functionality and thorough documentation have been very helpful!

Following the change from 30-minute to 5-minute bidding around February/March 2021, the NEMweb monthly archives post February 2021 no longer seem to contain the PUBLIC_DVD_BIDDAYOFFER_D tables:
February 2021 monthly archive vs March 2021 monthly archive

As a result, the mms_db.DBManager.populate call fails when attempting to gather data for March 2021 onwards:

Downloading MMS table for year=2021 month=2
Downloading MMS table for year=2021 month=3
---------------------------------------------------------------------------
_MissingData                              Traceback (most recent call last)
~\AppData\Local\Temp\15/ipykernel_37364/3480331206.py in <module>
     34 download_inputs = True
     35 if download_inputs:
---> 36     mms_db_manager.populate(
     37         start_year=start.year,
     38         start_month=start.month,

...\.venv\lib\site-packages\nempy\historical_inputs\mms_db.py in populate(self, start_year, start_month, end_year, end_month, verbose)
    284                 self.DISPATCHREGIONSUM.add_data(year=year, month=month)
    285                 self.DISPATCHLOAD.add_data(year=year, month=month)
--> 286                 self.BIDPEROFFER_D.add_data(year=year, month=month)
    287                 self.BIDDAYOFFER_D.add_data(year=year, month=month)
    288                 self.DISPATCHCONSTRAINT.add_data(year=year, month=month)

...\.venv\lib\site-packages\nempy\historical_inputs\mms_db.py in add_data(self, year, month)
    683         None
    684         """
--> 685         data = _download_to_df(self.url, self.table_name, year, month)
    686         if 'INTERVENTION' in data.columns:
    687             data = data[data['INTERVENTION'] == 0]

...\.venv\lib\site-packages\nempy\historical_inputs\mms_db.py in _download_to_df(url, table_name, year, month)
    373     r = requests.get(url)
    374     if r.status_code != 200:
--> 375         raise _MissingData(("""Requested data for table: {}, year: {}, month: {} 
    376                               not downloaded. Please check your internet connection. Also check
    377                               http://nemweb.com.au/#mms-data-model, to see if your requested

_MissingData: Requested data for table: BIDPEROFFER_D, year: 2021, month: 3 
                              not downloaded. Please check your internet connection. Also check
                              http://nemweb.com.au/#mms-data-model, to see if your requested
                              data is uploaded.

Are there any plans to patch this and use an alternative table that is still supplied in the monthly archives, and/or do you have any advice for temporarily patching this in the meantime?

Thanks in advance.

Mismatch between bid files and dudetailsummary

Getting an error when running the solve for an old datetime.

unit_bid_limit is invalid against the unit_info.

Could the MMS table DUDETAILSUMMARY not be bringing back the full list of DUIDs that submitted a volume/price bid for this datetime? The following DUIDs are missing: ['ADPBA1G', 'ADPBA1L', 'BOWWBA1G', 'BOWWBA1L', 'BULBESG1', 'BULBESL1', 'DRXNDA01', 'DRXVDJ01', 'VBBG1', 'VBBL1', 'WALGRVG1', 'WALGRVL1', 'WANDBG1', 'WANDBL1']

Can you please try the repro below to check you receive the same error (adjusted for your db paths):


import sqlite3
from datetime import datetime, timedelta
import random
import pandas as pd
from nempy import markets
from nempy.historical_inputs import loaders, mms_db, \
    xml_cache, units, demand, interconnectors, constraints, rhs_calculator
from nempy.help_functions.helper_functions import update_rhs_values

con = sqlite3.connect('D:/nempy_2021/historical_mms.db')
mms_db_manager = mms_db.DBManager(connection=con)

xml_cache_manager = xml_cache.XMLCacheManager('D:/nempy_2021/xml_cache')

interval = '2021/12/08 15:30:00'

# Note: the repro as posted omitted constructing the loader; presumably
# something like this is needed first:
raw_inputs_loader = loaders.RawInputsLoader(
    nemde_xml_cache_manager=xml_cache_manager,
    market_management_system_database=mms_db_manager)

raw_inputs_loader.set_interval(interval)
unit_inputs = units.UnitData(raw_inputs_loader)
interconnector_inputs = interconnectors.InterconnectorData(raw_inputs_loader)
constraint_inputs = constraints.ConstraintData(raw_inputs_loader)
demand_inputs = demand.DemandData(raw_inputs_loader)
rhs_calculation_engine = rhs_calculator.RHSCalc(xml_cache_manager)

unit_info = unit_inputs.get_unit_info()
market = markets.SpotMarket(market_regions=['QLD1', 'NSW1', 'VIC1',
                                            'SA1', 'TAS1'],
                            unit_info=unit_info)

# Set bids
volume_bids, price_bids = unit_inputs.get_processed_bids()
market.set_unit_volume_bids(volume_bids)
market.set_unit_price_bids(price_bids)

# Set bid in capacity limits
unit_bid_limit = unit_inputs.get_unit_bid_availability()
market.set_unit_bid_capacity_constraints(unit_bid_limit)
Full Traceback:

Traceback (most recent call last):
  File "/opt/pycharm-2022.3.1/plugins/python/helpers/pydev/pydevconsole.py", line 364, in runcode
    coro = func()
  File "", line 23, in
  File "/home/michael/Documents/Gitrepos/nempy/nempy/markets.py", line 467, in set_unit_bid_capacity_constraints
    self._validate_unit_limits(unit_limits)
  File "/home/michael/Documents/Gitrepos/nempy/nempy/markets.py", line 477, in _validate_unit_limits
    schema.validate(unit_limits)
  File "/home/michael/Documents/Gitrepos/nempy/nempy/spot_markert_backend/dataframe_validator.py", line 29, in validate
    self.columns[col].validate(df[col])
  File "/home/michael/Documents/Gitrepos/nempy/nempy/spot_markert_backend/dataframe_validator.py", line 62, in validate
    self._check_allowed_values(series)
  File "/home/michael/Documents/Gitrepos/nempy/nempy/spot_markert_backend/dataframe_validator.py", line 79, in _check_allowed_values
    raise ColumnValues("The column {} can only contain the values {}.".format(self.name, self.allowed_values))
nempy.spot_markert_backend.dataframe_validator.ColumnValues: The column unit can only contain the values 0 AGLHAL 1 AGLNOW1 2 AGLSITA1 3 AGLSOM 4 ANGAST1 ... 537 YWNL1 538 YWPS1 539 YWPS2 540 YWPS3 541 YWPS4 Name: unit, Length: 542, dtype: object.
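A quick way to confirm and work around a mismatch like this is to diff the DUID sets before calling the setter. The frames below are toy stand-ins, not real MMS data:

```python
import pandas as pd

# Toy stand-ins for unit_info and the bid availability table.
unit_info = pd.DataFrame({'unit': ['AGLHAL', 'AGLNOW1']})
unit_bid_limit = pd.DataFrame({'unit': ['AGLHAL', 'AGLNOW1', 'ADPBA1G'],
                               'capacity': [100.0, 50.0, 25.0]})

# Which DUIDs in the limits table are unknown to unit_info?
missing = sorted(set(unit_bid_limit['unit']) - set(unit_info['unit']))
print(missing)  # ['ADPBA1G']

# One interim workaround: filter the limits to known units before the call.
filtered = unit_bid_limit[unit_bid_limit['unit'].isin(unit_info['unit'])]
print(filtered['unit'].tolist())  # ['AGLHAL', 'AGLNOW1']
```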

New Feature: Add interconnector limit setter

Allow analysts to examine interconnector limits and the associated constraints "setting" these interconnector limits.

These can be compared against the DISPATCHINTERCONNECTORRES table from MMS.
