
attack68 / rateslib


A fixed income library for pricing bonds and bond futures, and derivatives such as IRS, cross-currency and FX swaps. Contains tools for full curveset construction with market standard optimisers and automatic differentiation (AD), and risk sensitivity calculations including delta and cross-gamma.

Home Page: https://rateslib.readthedocs.io/en/stable/

License: Other

Python 62.70% Jupyter Notebook 1.29% Rust 36.01%
bonds fixed-income swaps trading cross-currency currency curves derivatives derivatives-pricing finance

rateslib's Introduction

rateslib

Rateslib is a state-of-the-art fixed income library designed for Python. Its purpose is to provide advanced, flexible and efficient fixed income analysis with a high-level, well-documented API.

Its design objective is to provide a self-consistent, arbitrage-free framework for pricing all aspects of fixed income trading: spot FX, FX forwards, single-currency securities and derivatives such as fixed rate bonds and IRSs, and multi-currency derivatives such as FX swaps and cross-currency swaps. Options, swaptions and inflation products are also under consideration for future development.

The techniques and object interaction within rateslib were inspired by the requirements of multi-disciplined fixed income teams working, both cooperatively and independently, within global investment banks.

Licence

This library is released under a Creative Commons Attribution, Non-Commercial, No-Derivatives 4.0 International Licence.

Get Started

Read the documentation at Rateslib Read-the-Docs
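
As a flavour of the API, here is a minimal end-to-end sketch of the Curve / Solver / Instrument workflow (adapted from the examples collected further down this page; the dates and rates are purely illustrative):

from rateslib import dt, Curve, IRS, Solver

# a three-node discount factor curve calibrated to 1Y and 2Y par swap rates
curve = Curve(
    nodes={dt(2022, 1, 1): 1.0, dt(2023, 1, 1): 1.0, dt(2024, 1, 1): 1.0},
    id="curve",
)
solver = Solver(
    curves=[curve],
    instruments=[
        IRS(dt(2022, 1, 1), "1Y", "A", curves="curve"),
        IRS(dt(2022, 1, 1), "2Y", "A", curves="curve"),
    ],
    s=[1.21, 1.635],  # par rates in %
)

# price and risk an off-market swap against the solved curve
irs = IRS(dt(2022, 1, 1), "2Y", "A", fixed_rate=2.0, notional=10e6, curves="curve")
print(irs.npv(solver=solver))
print(irs.delta(solver=solver))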

rateslib's People

Contributors

attack68, bobjansen, xiar-fatah


rateslib's Issues

FloatRateNote is failing to calculate cashflows

Hi! I've been trying to follow the example in the FloatingRateNote guide, but there seems to be a bug in the definition of the fixing.

I have not been able to produce working cashflows, despite trying many combinations of the parameters.


Can't replicate Bloomberg USD OIS Discount Curve

I have followed the cookbook for SOFR and applied it to the USD OIS discount curve. The rateslib discount factors match the discount factors from Bloomberg up to the 18M tenor but then start deviating. What do I need to do to match the Bloomberg discount rates?


The code is as follows:

import typing
import math
import pandas
import rateslib


if __name__ == '__main__':
    reference_date: rateslib.dt = rateslib.dt(2024, 3, 1)
    settlement_date: rateslib.dt = rateslib.dt(2024, 3, 5)

    # Bloomberg USD OIS Swap Rates
    swaps: typing.Mapping[str, float] = {
        "1W": 5.33000,
        "2W": 5.33260,
        "3W": 5.33300,
        "1M": 5.33535,
        "2M": 5.34110,
        "3M": 5.33440,
        "4M": 5.30550,
        "5M": 5.27900,
        "6M": 5.24035,
        "9M": 5.12030,
        "1Y": 4.98700,
        "18M": 4.65575,
        "2Y": 4.43105,
        "3Y": 4.13593,
        "4Y": 3.97872,
        "5Y": 3.88900,
        "6Y": 3.83837,
        "7Y": 3.80548,
        "8Y": 3.78547,
        "9Y": 3.77427,
        "10Y": 3.76773,
        "12Y": 3.76733,
        "15Y": 3.77146,
        "20Y": 3.73336,
        "25Y": 3.64168,
        "30Y": 3.54345,
        "40Y": 3.33623,
        "50y": 3.11935
    }
    data: pandas.DataFrame = pandas.DataFrame(
        data={
            "Term": list(swaps.keys ()),
            "Market Rate": list(swaps.values ()),
            "Maturity Date": [
                rateslib.add_tenor(start=settlement_date, tenor=tenor, modifier="MF", calendar = "nyc")
                for tenor in swaps.keys()
            ]
        }
    )

    ois = rateslib.Curve(
        id="OIS",
        convention="Act360",
        calendar="nyc" ,
        modifier="MF",
        interpolation="log_linear",
        nodes={
            **{reference_date: 1.0},
            **{_: 1.0 for _ in data["Maturity Date"]}
        }
    )

    ois_args = dict(effective=settlement_date, spec="usd_irs", curves="OIS")
    solver = rateslib.Solver(
        curves=[ois],
        instruments=[rateslib.IRS(termination=_, **ois_args) for _ in data["Maturity Date"]],
        s=data["Market Rate"],
        instrument_labels=data["Term"],
        id="us_rates"
    )

    data["Discount"] = [float(ois[_]) for _ in data["Maturity Date"]]
    data["Zero Rate"] = data.apply (
        lambda x : -100.0 * math.log(x["Discount"]) * 365.0 / (x["Maturity Date"] - reference_date).days,
        axis=1
    )
    with pandas.option_context("display.float_format", lambda x: "%.6f" % x):
        print(data[["Term", "Maturity Date", "Market Rate", "Zero Rate", "Discount"]])

Output is as follows:

SUCCESS: `func_tol` reached after 7 iterations (levenberg_marquardt) , `f_val`: 2.053280258071521e-15, `time`: 0.2629s
   Term Maturity Date  Market Rate  Zero Rate  Discount
0    1W    2024-03-12     5.330000   5.401229  0.998374
1    2W    2024-03-19     5.332600   5.401102  0.997340
2    3W    2024-03-26     5.333000   5.399085  0.996309
3    1M    2024-04-05     5.335350   5.397540  0.994838
4    2M    2024-05-06     5.341100   5.391176  0.990299
5    3M    2024-06-05     5.334400   5.373175  0.985967
6    4M    2024-07-05     5.305500   5.333618  0.981757
7    5M    2024-08-05     5.279000   5.295914  0.977478
8    6M    2024-09-05     5.240350   5.246586  0.973338
9    9M    2024-12-05     5.120300   5.096887  0.961789
10   1Y    2025-03-05     4.987000   4.937667  0.951308
11  18M    2025-09-05     4.655750   4.631095  0.932241
12   2Y    2026-03-05     4.431050   4.388055  0.915539
13   3Y    2027-03-05     4.135930   4.092952  0.884054
14   4Y    2028-03-06     3.978720   3.935079  0.853807
15   5Y    2029-03-05     3.889000   3.845085  0.824663
16   6Y    2030-03-05     3.838370   3.794752  0.795961
17   7Y    2031-03-05     3.805480   3.762279  0.768070
18   8Y    2032-03-05     3.785470   3.743005  0.740777
19   9Y    2033-03-07     3.774270   3.732889  0.714067
20  10Y    2034-03-06     3.767730   3.727593  0.688339
21  12Y    2036-03-05     3.767330   3.730668  0.638652
22  15Y    2039-03-07     3.771460   3.739299  0.570172
23  20Y    2044-03-07     3.733360   3.692612  0.477288
24  25Y    2049-03-05     3.641680   3.565600  0.409681
25  30Y    2054-03-05     3.543450   3.423756  0.357665
26  40Y    2064-03-05     3.336230   3.112623  0.287583
27  50y    2074-03-05     3.119350   2.773366  0.249599

FloatPeriod.npv() TypeError: 'float' object is not subscriptable

return 0.0 # payment date is in the past avoid issues with fixings or rates

Hi,
I've encountered a TypeError when calling rl.instruments.IRS.delta() for an IRS where one or more cash flows are in the past.

I found that FloatPeriod.npv() seems to be somewhat unstable, unlike its sibling FixedPeriod.npv().

Since disc_curve_[self.payment] always returns zero when the payment date is before the valuation date, it looks odd that this branch returns a plain 'float' object.

[version 1]

disc_curve_: Union[Curve, NoInput] = _disc_maybe_from_curve(curve, disc_curve)
if not isinstance(disc_curve_, Curve) or curve is NoInput.blank:
    raise TypeError("`curves` have not been supplied correctly.")
if self.payment < disc_curve_.node_dates[0]:
    if local:
        return {self.currency: 0.0}
    else:
        return 0.0  # payment date is in the past; avoid issues with fixings or rates
value = self.rate(curve) / 100 * self.dcf * disc_curve_[self.payment] * -self.notional
if local:
    return {self.currency: value}
else:
    fx, _ = _get_fx_and_base(self.currency, fx, base)
    return fx * value

[version 2] - Just delete

disc_curve_: Union[Curve, NoInput] = _disc_maybe_from_curve(curve, disc_curve)
if not isinstance(disc_curve_, Curve) or curve is NoInput.blank:
    raise TypeError("`curves` have not been supplied correctly.")
value = self.rate(curve) / 100 * self.dcf * disc_curve_[self.payment] * -self.notional
if local:
    return {self.currency: value}
else:
    fx, _ = _get_fx_and_base(self.currency, fx, base)
    return fx * value

Thanks for your wonderful work.
Regards,

BUG: payment date and DF shown in FRA cashflows method

I'm currently trying to check how the bootstrapping of FRAs works and came across the following, using this reproducible piece of code:

import rateslib as rl
import pandas as pd
from datetime import datetime

euribor_data = pd.DataFrame(
    {
        "Term": ["6M"],
        "Rate": [4.092],
    }
)
euribor_data["Termination"] = [rl.add_tenor(datetime(2023, 11, 2), _, "F", "bus") for _ in euribor_data["Term"]]

euribor_args = dict(curves="euribor6m", frequency="S", termination="6m", calendar="tgt")

euribor6m_curve = rl.Curve(
    id="euribor6m",
    convention="Act360",
    calendar="tgt",
    modifier="MF",
    interpolation="log_linear",
    nodes={
        **{datetime(2023, 10, 31): 1.0},  # <- this is today's DF,
        **{_: 1.0 for _ in euribor_data["Termination"]},
    },
)

instruments = [
    rl.FRA(datetime(2023, 11, 2), **euribor_args),  # 6M
]

solver = rl.Solver(
    curves=[euribor6m_curve],
    instruments=instruments,
    s=euribor_data["Rate"],
    instrument_labels=euribor_data["Term"],
    id="euribor6m"
)

print(rl.FRA(datetime(2023, 11, 2), **euribor_args).cashflows(euribor6m_curve).T)
Type FRA
Period Regular
Ccy USD
Acc Start 02/11/2023 00:00
Acc End 02/05/2024 00:00
Payment 02/11/2023 00:00
Convention ACT360
DCF 0.505555556
Notional 1000000
DF 0.97951153
Collateral
Rate
Spread -409.2

I found the DF to be very low for a date only 2 days after the curve date. After checking the cashflows method of the FRA object, I see that npv_local is calculated based on the first item of self.leg1.schedule.pschedule, which in this case is 2023-11-02. In the FixedPeriod.cashflows() method, the FixedPeriod has a payment date of 2024-05-02, which is used to determine the discount factor. However, the payment date shown in the output is 2023-11-02. From my understanding, FRAs are cash settled at the start (as per Wikipedia), so I would expect both the payment date and the discount factor to be based on the Acc Start.
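
A quick way to see the discrepancy is to query the solved curve at both candidate dates (continuing from the snippet above; values approximate):

# compare the DF at the FRA's cash-settlement (accrual start) date with the DF at the accrual end
print(float(euribor6m_curve[datetime(2023, 11, 2)]))  # ~0.9998, two days after the curve date
print(float(euribor6m_curve[datetime(2024, 5, 2)]))   # ~0.9795, the figure reported in the output above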

REF: error reporting in dual

raise TypeError(f"Dual operations defined between float, int or Dual, got: {type(argument)}")

dual.py line 437

DOC: Requesting a Cookbook Example of fitting a bond curve

Hi @attack68 ,

I've come across your excellent rateslib and books via this Quant Stackexchange post:
https://quant.stackexchange.com/questions/78698/bond-curve-fitting-practical-question

I wonder if a real example, based on the discussion in that post, of fitting such a bond curve (preferably US Treasury, but other DM Sovereign bonds work too) using rateslib could be added to the cookbook in the future.

Also, can rateslib extract the zero curve and the forward curve from a fitted curve?

Thanks!
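
For reference on the second question, zero and forward rates can be backed out of any solved discount-factor Curve by hand; a minimal sketch with an illustrative curve and illustrative day-count conventions:

import math
from rateslib import dt, Curve

curve = Curve(nodes={dt(2022, 1, 1): 1.0, dt(2023, 1, 1): 0.98, dt(2024, 1, 1): 0.955})  # illustrative DFs

def zero_rate(curve, ref, d):
    # continuously compounded zero rate (Act/365 style) implied by the discount factor
    t = (d - ref).days / 365.0
    return -100.0 * math.log(float(curve[d])) / t

def forward_rate(curve, d1, d2):
    # simple forward rate between two dates from the ratio of discount factors (Act/360 style)
    dcf = (d2 - d1).days / 360.0
    return 100.0 * (float(curve[d1]) / float(curve[d2]) - 1.0) / dcf

print(zero_rate(curve, dt(2022, 1, 1), dt(2024, 1, 1)))     # ~2.30
print(forward_rate(curve, dt(2023, 1, 1), dt(2024, 1, 1)))  # ~2.58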

Caps

Hi,

I hope all is well. For a cap implementation I have made some modifications to BaseMixin to create projected cashflows for a floating leg only:

Type Period Ccy Acc Start Acc End Payment Convention DCF Notional DF Collateral Rate Spread Cashflow NPV FX Rate NPV Ccy
leg2 0 FloatPeriod Regular USD 2022-01-03 2023-01-03 2023-01-03 Act360 1.013888889 -100000000 0.964861218 3.572006396 0 3621617.596 3494358.365 1 3494358.365
leg2 1 FloatPeriod Regular USD 2023-01-03 2024-01-02 2024-01-02 Act360 1.011111111 -100000000 0.939932404 2.623047212 0 2652192.181 2492881.373 1 2492881.373
leg2 2 FloatPeriod Regular USD 2024-01-02 2025-01-02 2025-01-02 Act360 1.016666667 -100000000 0.915515984 2.623236675 0 2666957.286 2441642.024 1 2441642.024

Then I can simply remove the cashflows which are not above the strike. However, from a trader's perspective, what is the standard practice for viewing projected cashflows for a cap? Is it the theoretical price of the caplet at each cashflow, for example under Black-76, or is it what is described initially, i.e. whether the forecast rate is above the strike or not? Could it be that both are interesting to view? Thanks in advance.
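
For context, the caplet-level theoretical value mentioned above is normally the Black-76 formula. A generic sketch (standard formula, not rateslib API), reusing the first period's forward, DCF and DF from the table above, with an assumed 3% strike, 20% lognormal vol and 1Y expiry:

import math

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black76_caplet(forward, strike, vol, expiry, dcf, df, notional=1.0):
    # undiscounted Black-76 call on the forward rate, then discount and scale by notional * dcf
    if expiry <= 0.0 or vol <= 0.0:
        return notional * dcf * df * max(forward - strike, 0.0)
    std = vol * math.sqrt(expiry)
    d1 = (math.log(forward / strike) + 0.5 * std**2) / std
    d2 = d1 - std
    return notional * dcf * df * (forward * norm_cdf(d1) - strike * norm_cdf(d2))

# forward 3.572%, dcf 1.013888889, DF 0.964861218 taken from the first FloatPeriod row above
print(black76_caplet(0.03572, 0.03, 0.20, 1.0, 1.013888889, 0.964861218, 100e6))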

PERF: `dual_solve` for `fx_vector`

Profiling shows that dual_solve for the determination of fx_vector is quite slow. Due to the structure and sparsity of the matrix it should be possible to use a more direct algorithm to determine the dual gradients of the fx vector components.

ENH: Handling back stub interpolators

Rateslib can't handle interpolation of different IBOR tenors with a single forecast curve. The solution is to provide a mapping of curves. Since it is known whether a FloatPeriod is a stub or not, a fallback to using this mapping can be added, as sketched below.
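
A rough sketch of the kind of tenor-to-curve fallback described above (names and selection rule are hypothetical, not rateslib API; real stub handling would interpolate between the two neighbouring tenors rather than snap to one):

from datetime import datetime

# hypothetical tenor-to-curve mapping; values would be rateslib Curve objects in practice
ibor_curves = {"1M": "curve_1m", "3M": "curve_3m", "6M": "curve_6m"}

def stub_forecast_curve(accrual_start: datetime, accrual_end: datetime, mapping: dict):
    # choose the curve whose tenor (in months) is closest to the stub's length
    months = (accrual_end - accrual_start).days / 30.44
    key = min(mapping, key=lambda k: abs(int(k[:-1]) - months))
    return mapping[key]

# a ~2 month front stub would fall back to the 3M forecast curve here
print(stub_forecast_curve(datetime(2024, 1, 15), datetime(2024, 3, 20), ibor_curves))  # "3M"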

ENH: shifted curves which do not disassociate

It is useful to preserve AD by allowing creation of shifted curves which maintain an association to the underlying curve from which they are shifted.

Similarly for other Curve operations, although this might be more difficult to maintain (or much less performant?)

BUG: curve failure out of bounds?

from rateslib import dt, Curve, Solver, IRS

dates = [
    dt(2000, 1, 3),
    dt(2001, 1, 3),
    dt(2002, 1, 3),
    dt(2003, 1, 3),
    dt(2004, 1, 3),
    dt(2005, 1, 3),
    dt(2006, 1, 3),
    dt(2007, 1, 3),
    dt(2008, 1, 3),
    dt(2009, 1, 3),
    dt(2010, 1, 3),
    dt(2012, 1, 3),
    dt(2015, 1, 3),
    dt(2020, 1, 3),
    dt(2025, 1, 3),
    dt(2030, 1, 3),
    dt(2035, 1, 3),
    dt(2040, 1, 3),
    dt(2050, 1, 3),
]
curve = Curve(
    nodes={_: 1.0 for _ in dates},
    t=[dt(2000, 1, 3)] * 3 + dates[:-1] + [dt(2050, 1, 11)] * 4
)
solver = Solver(
    curves=[curve],
    instruments=[
        IRS(dt(2000, 1, 3), _, spec="gbp_irs", curves=curve) for _ in dates[1:]
    ],
    s=[1.0 + _ / 25 for _ in range(18)]
)

DOC: Coming from QuantLib

Hi,

I started a small draft for users coming from QuantLib, which I assume some will be. When I initially started viewing the documentation I recall thinking it would be nice to have a comparison page between QuantLib and rateslib. However, I am not sure if it is relevant; if not, please close this issue. I have yet to consider where it should live in the documentation. The draft can be viewed at my fork:

Any feedback is appreciated. In addition, QuantLib==1.31.1 would need to be added to requirements.txt.

ENH: Interpolation Error Handling

Hi,

I would like to add the following lines:

if interpolation not in ['linear', 'linear_index', 'log_linear', 'linear_zero_rate', 'flat_forward', 'flat_backward']:
    raise TypeError('interpolation must be one of "linear", "linear_index", "log_linear", "linear_zero_rate", "flat_forward", "flat_backward"')

to the interpolate function, in case the user mistypes the interpolation parameter. I can make a PR if it is okay.

The final code would be:

def interpolate(x, x_1, y_1, x_2, y_2, interpolation, start=None):
    if interpolation not in ['linear', 'linear_index', 'log_linear', 'linear_zero_rate', 'flat_forward', 'flat_backward']:
        raise TypeError('interpolation must be one of "linear", "linear_index", "log_linear", "linear_zero_rate", "flat_forward", "flat_backward"')
    elif interpolation == "linear":

        def op(z):
            return z

    elif interpolation == "linear_index":

        def op(z):
            return 1 / z

        y_1, y_2 = 1 / y_1, 1 / y_2
    elif interpolation == "log_linear":
        op, y_1, y_2 = dual_exp, dual_log(y_1), dual_log(y_2)
    elif interpolation == "linear_zero_rate":
        # convention not used here since we just determine linear rate interpolation
        y_2 = dual_log(y_2) / ((start - x_2) / timedelta(days=365))
        if start == x_1:
            y_1 = y_2
        else:
            y_1 = dual_log(y_1) / ((start - x_1) / timedelta(days=365))

        def op(z):
            return dual_exp((start - x) / timedelta(days=365) * z)

    elif interpolation == "flat_forward":
        if x >= x_2:
            return y_2
        return y_1
    elif interpolation == "flat_backward":
        if x <= x_1:
            return y_1
        return y_2
    ret = op(y_1 + (y_2 - y_1) * ((x - x_1) / (x_2 - x_1)))
    return ret
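
For context, the log_linear mode used throughout this page reduces to linear interpolation of log(DF); a plain-float illustration of the same arithmetic:

import math
from datetime import datetime

def log_linear_df(x: datetime, x_1: datetime, y_1: float, x_2: datetime, y_2: float) -> float:
    # interpolate log(DF) linearly in time, then exponentiate back to a discount factor
    w = (x - x_1) / (x_2 - x_1)
    return math.exp(math.log(y_1) + w * (math.log(y_2) - math.log(y_1)))

print(log_linear_df(datetime(2024, 6, 1), datetime(2024, 1, 1), 1.0, datetime(2025, 1, 1), 0.96))  # ~0.9832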

DOC: cleanups

  • On the index page:
    • describe the curve highlights in more detail.
    • move the products up to the top and update
    • provide some graphics for curves.
  • Use top level import by default to make it easier for users. Re-write the Get Started.
  • On user guide re-write "basic Curve and Instrument" as fundamental constructors, or similar.
  • Remove the specific module imports and use from rateslib import *.
  • Add MultiCsaCurves to the section on UserGuide.
  • Add cookbook index to UserGuide
  • Update Spread
  • Update FLY

DOC: re-organise docs for `instruments`

  • IRS (+Swap aliased)
    • Link spread examples elsewhere
  • SBS
  • FRA
  • ZCS
  • XCS
  • FixedFloatXCS
  • FixedFixedXCS
  • FloatFixedXCS
  • FXSwap
  • NonMtmXCS
  • NonMtmFixedFloatXCS
  • NonMtmFixedFixedXCS
  • FixedRateBond
  • Bill
  • BondFuture

BUG: LONG stub inference does not work for string roll

When using a string roll the dates can be before or after those inferred by numerical values.

Eg.

For a long front stub with dates (ueff=dt(2023, 3, 17), uterm=dt(2023, 12, 20)) the expectation here is dt(2023, 9, 20), because 15th March 2023 is an IMM date, so the long stub will roll over the Jun IMM date into September (for a "Q" frequency).

For a long front stub with dates (ueff=dt(2022, 12, 19), uterm=dt(2023, 12, 20)) the expectation is dt(2023, 3, 15), because the IMM date in Dec 22 is the 21st, so the long period runs over this date to the next IMM date in March.
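
For reference, IMM dates are the third Wednesday of March, June, September and December; a small helper (illustrative only) reproduces the two dates cited above:

import calendar
from datetime import datetime

def imm_date(year: int, month: int) -> datetime:
    # third Wednesday of the given month
    first_weekday = calendar.weekday(year, month, 1)  # Monday=0 ... Sunday=6
    first_wed = 1 + (2 - first_weekday) % 7           # day-of-month of the first Wednesday
    return datetime(year, month, first_wed + 14)

print(imm_date(2022, 12))  # 2022-12-21
print(imm_date(2023, 3))   # 2023-03-15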

FEATURE: Sort the nodes dictionary when passed to Curve

Hi.

I came across the following

import rateslib as rl

maturity = [rl.dt(2024, 3, 4), rl.dt(2024, 3, 11), rl.dt(2024, 3, 18)]

nodes = {}

nodes[rl.dt(2024, 2, 26)] = 1
for date in maturity:
    nodes[date] = 1
curve = rl.Curve(nodes=nodes, interpolation="linear", id="ESTR")
instruments = [
    rl.IRS(rl.dt(2024, 2, 26), "1W", "A", curves="ESTR"),
    rl.IRS(rl.dt(2024, 2, 26), "2W", "A", curves="ESTR"),
    rl.IRS(rl.dt(2024, 2, 26), "3W", "A", curves="ESTR"),
]
solver = rl.Solver(curves=[curve], instruments=instruments, s=[3.906, 3.909, 3.907])

Which results in
SUCCESS: `func_tol` reached after 2 iterations (levenberg_marquardt) , `f_val`: 5.481547784569765e-14, `time`: 0.0053s
and

maturity = [rl.dt(2024, 3, 4), rl.dt(2024, 3, 11), rl.dt(2024, 3, 18)]

nodes = {}

for date in maturity:
    nodes[rl.dt(date.year, date.month, date.day)] = 1

nodes[rl.dt(2024, 2, 26)] = 1
curve = rl.Curve(nodes=nodes, interpolation="linear", id="ESTR")
instruments = [
    rl.IRS(rl.dt(2024, 2, 26), "1W", "A", curves="ESTR"),
    rl.IRS(rl.dt(2024, 2, 26), "2W", "A", curves="ESTR"),
    rl.IRS(rl.dt(2024, 2, 26), "3W", "A", curves="ESTR"),
]
solver = rl.Solver(curves=[curve], instruments=instruments, s=[3.906, 3.909, 3.907])

Which results in:
FAILURE: `max_iter` of 100 iterations breached, `f_val`: 36073720.658908844, `time`: 0.1328s
The only difference is where nodes[rl.dt(2024, 2, 26)] = 1 is inserted into the nodes dict. Do you think it is worth adding a step to Curve that sorts nodes? I know that QuantLib has an equivalent step that sorts all the benchmark instruments passed in, but I am not certain it is worth having one when I consider speed.
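
Until such a step exists, a one-line workaround is to sort the dict before constructing the Curve; a sketch based on the second snippet above:

import rateslib as rl

nodes = {}
for date in [rl.dt(2024, 3, 4), rl.dt(2024, 3, 11), rl.dt(2024, 3, 18)]:
    nodes[date] = 1.0
nodes[rl.dt(2024, 2, 26)] = 1.0      # initial node added last, i.e. out of order

nodes = dict(sorted(nodes.items()))  # restore chronological order before construction
curve = rl.Curve(nodes=nodes, interpolation="linear", id="ESTR")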

PERF: improve the brent root finder for bond yields

Previously the scipy brentq method was used. This was 6-10 times faster than the local Brent implementation, but it required a full dependency on scipy. See #3.

Given that the price-to-yield function is shallow and nearly quadratic, there may be a more efficient means of finding the root.
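
For comparison, a plain Newton-Raphson iteration converges in a handful of steps on this kind of shallow problem; a generic sketch for an annual-coupon bond (illustrative, not rateslib's internal pricer):

def price_from_yield(y: float, coupon: float, n_years: int) -> float:
    # price per 100 notional of an annual-coupon bond, compounded once per period
    dfs = [(1 + y) ** -i for i in range(1, n_years + 1)]
    return 100 * (coupon * sum(dfs) + dfs[-1])

def yield_from_price(target: float, coupon: float, n_years: int, y0: float = 0.05) -> float:
    y, h = y0, 1e-6
    for _ in range(20):
        f = price_from_yield(y, coupon, n_years) - target
        dfdy = (price_from_yield(y + h, coupon, n_years) - f - target) / h  # numerical derivative
        y_new = y - f / dfdy
        if abs(y_new - y) < 1e-12:
            return y_new
        y = y_new
    return y

print(yield_from_price(98.50, 0.04, 10))  # ~0.0419

Because the price-yield map is smooth and nearly quadratic, Newton-type iterations typically converge in three to five steps from a reasonable initial guess.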

BUG: Error running cashflows on IRS with a NPV

from rateslib import *

par_curve = Curve(
    nodes={
        dt(2022, 1, 1): 1.0,
        dt(2023, 1, 1): 1.0,
        dt(2024, 1, 1): 1.0,
        dt(2025, 1, 1): 1.0,
    },
    id="curve",
)
par_instruments = [
    IRS(dt(2022, 1, 1), "1Y", "A", curves="curve"),
    IRS(dt(2022, 1, 1), "2Y", "A", curves="curve"),
    IRS(dt(2022, 1, 1), "3Y", "A", curves="curve"),
]

par_solver = Solver(
    curves=[par_curve],
    instruments=par_instruments,
    s=[1.21, 1.635, 1.99],
    id="par_solver",
    instrument_labels=["1Y", "2Y", "3Y"],
)

T_irs = IRS(
    effective=dt(2020, 12, 15),
    termination=dt(2037, 12, 15),
    notional=-600e6,
    frequency="A",
    leg2_frequency="A",
    fixed_rate=4.5,
    curves="curve",
)

print(T_irs.npv(solver=par_solver))
T_irs.cashflows(solver=par_solver)

File ~\anaconda3\Lib\site-packages\rateslib\periods.py:1090, in FloatPeriod._rfr_rate_from_df_curve(self, curve)
1088 def _rfr_rate_from_df_curve(self, curve: Curve):
1089 if self.fixing_method == "rfr_payment_delay" and not self._is_inefficient:
-> 1090 return curve.rate(self.start, self.end) + self.float_spread / 100
1092 elif self.fixing_method == "rfr_observation_shift" and not self._is_inefficient:
1093 start = add_tenor(self.start, f"-{self.method_param}b", "P", curve.calendar)

TypeError: unsupported operand type(s) for +: 'NoneType' and 'float'

ENH: `proxy_curves` JSON

Consider allowing a to_json method for proxy curves to mediate string exchange of data. Or possibly, just configure a to_json method for an FXForwards object, from which the ProxyCurve can be reconstituted.

Solver on 6M Euribor curve leads to odd DF's

Hi,

I hope all is well. I'm currently trying to match Bloomberg's bootstrapping algorithm for 6M Euribor for curve date 31/10/2023. I can fully match it for ESTR, so I assume I'm setting up my 6M Euribor curve wrong. I have mainly two questions:

  1. Is there anything I have to change in my euribor_args2 to better match the conventions of Bloomberg? They seem in line with what I see in the swap manager but maybe I'm missing something in the arguments.

  2. I encounter some issues in the DFs when including the 18M tenor. I was wondering if you could provide more information on why the package behaves like this when calibrating the DF for this tenor, compared to what Bloomberg is doing.

When excluding 18M tenor:

Term Rate Termination DF
0 6M 4.092 2024-05-02 00:00:00 0.979512
1 7M 4.045 2024-06-03 00:00:00 0.976215
2 8M 4.008 2024-07-02 00:00:00 0.973217
3 9M 3.946 2024-08-02 00:00:00 0.970127
4 10M 3.878 2024-09-02 00:00:00 0.967068
5 11M 3.8175 2024-10-02 00:00:00 0.964114
6 12M 3.733 2024-11-04 00:00:00 0.960977
7 15M 3.487 2025-02-03 00:00:00 0.953049
8 2Y 3.5924 2025-11-03 00:00:00 0.930604
9 3Y 3.37925 2026-11-02 00:00:00 0.903791
10 4Y 3.2876 2027-11-02 00:00:00 0.877128
11 5Y 3.259 2028-11-02 00:00:00 0.85001
12 6Y 3.26 2029-11-02 00:00:00 0.822759
13 7Y 3.27455 2030-11-04 00:00:00 0.795407
14 8Y 3.293 2031-11-03 00:00:00 0.768663
15 9Y 3.322 2032-11-02 00:00:00 0.741658
16 10Y 3.346 2033-11-02 00:00:00 0.715533
17 11Y 3.374 2034-11-02 00:00:00 0.689589
18 12Y 3.397 2035-11-02 00:00:00 0.664598
19 15Y 3.4305 2038-11-02 00:00:00 0.596691
20 20Y 3.3725 2043-11-02 00:00:00 0.510453
21 25Y 3.2455 2048-11-02 00:00:00 0.45087
22 30Y 3.14375 2053-11-03 00:00:00 0.401093
23 40Y 2.96525 2063-11-02 00:00:00 0.327755
24 50Y 2.8175 2073-11-02 00:00:00 0.277542

When including the 18M tenor:

Term Rate Termination DF
0 6M 4.092 2024-05-02 00:00:00 0.981576
1 7M 4.045 2024-06-03 00:00:00 0.976583
2 8M 4.008 2024-07-02 00:00:00 0.973919
3 9M 3.946 2024-08-02 00:00:00 0.971171
4 10M 3.878 2024-09-02 00:00:00 0.968429
5 11M 3.8175 2024-10-02 00:00:00 0.965814
6 12M 3.733 2024-11-04 00:00:00 0.963002
7 15M 3.487 2025-02-03 00:00:00 0.954074
8 18M 3.252 2025-05-02 00:00:00 1.49479
9 2Y 3.5924 2025-11-03 00:00:00 1.07837
10 3Y 3.37925 2026-11-02 00:00:00 1.0473
11 4Y 3.2876 2027-11-02 00:00:00 1.01641
12 5Y 3.259 2028-11-02 00:00:00 0.984983
13 6Y 3.26 2029-11-02 00:00:00 0.953405
14 7Y 3.27455 2030-11-04 00:00:00 0.92171
15 8Y 3.293 2031-11-03 00:00:00 0.89072
16 9Y 3.322 2032-11-02 00:00:00 0.859426
17 10Y 3.346 2033-11-02 00:00:00 0.829152
18 11Y 3.374 2034-11-02 00:00:00 0.799089
19 12Y 3.397 2035-11-02 00:00:00 0.77013
20 15Y 3.4305 2038-11-02 00:00:00 0.69144
21 20Y 3.3725 2043-11-02 00:00:00 0.591508
22 25Y 3.2455 2048-11-02 00:00:00 0.522464
23 30Y 3.14375 2053-11-03 00:00:00 0.464783
24 40Y 2.96525 2063-11-02 00:00:00 0.379799
25 50Y 2.8175 2073-11-02 00:00:00 0.321613

Here is a working example of my code with the data included:

import rateslib as rl
import pandas as pd
from datetime import datetime

########################################################################################################################
# ESTR
estr_data = pd.DataFrame(
    {
        "Term": ["1W", "2W", "1M", "2M", "3M", "4M", "5M", "6M", "7M", "8M", "9M", "10M", "11M", "1Y", "18M",
                 "2Y", "3Y", "4Y", "5Y", "6Y", "7Y", "8Y", "9Y", "10Y", "11Y", "12Y", "15Y", "20Y", "25Y", "30Y", "40Y", "50Y"],
        "Rate": [3.89709997177124, 3.89860010147095, 3.90205001831055, 3.90700006484985, 3.91499996185303, 3.91925001144409,
                 4.91324996948242, 3.89674997329712, 3.87725019454956, 3.85500001907348, 3.82899999618531, 3.79800009727478,
                 4.76999998092652, 3.73399996757507, 3.50099992752076, 3.32200002670288, 3.10925006866456, 3.0239999294281,
                 2.99950003623962, 3.01049995422364, 3.02999997138977, 3.0605001449585, 3.09625005722046, 3.13425016403199,
                 3.18300008773804, 3.21140003204346, 3.2859001159668, 3.27040004730225, 3.17679977416992, 3.08430004119873,
                 2.96099996566772, 2.85899996757508],
    }
)

estr_data["Termination"] = [rl.add_tenor(datetime(2023, 11, 2), _, "F", "bus") for _ in estr_data["Term"]]

estr = rl.Curve(
    id="eurrfr",
    convention="Act360",
    calendar="bus",
    modifier="MF",
    interpolation="log_linear",
    nodes={
        **{datetime(2023, 10, 31): 1.0},  # <- this is today's DF,
        **{_: 1.0 for _ in estr_data["Termination"]},
    },
)
estr_args = dict(effective=datetime(2023, 11, 2), spec="eur_irs", curves="eurrfr")

solver_estr = rl.Solver(
    curves=[estr],
    instruments=[rl.IRS(termination=_, **estr_args) for _ in estr_data["Termination"]],
    s=estr_data["Rate"],
    instrument_labels=estr_data["Term"],
    id="eurrfr",
)

estr_data["DF"] = [float(estr[_]) for _ in estr_data["Termination"]]
with pd.option_context("display.float_format", lambda x: "%.6f" % x):
    print(estr_data)

########################################################################################################################
# EURIBOR

import pandas as pd

euribor_data = pd.DataFrame(
    {
        "Term": ["6M", "7M", "8M", "9M", "10M", "11M", "12M", "15M",
                 # "18M",
                 "2Y", "3Y", "4Y", "5Y", "6Y", "7Y", "8Y", "9Y", "10Y", "11Y", "12Y", "15Y", "20Y", "25Y", "30Y", "40Y",
                 "50Y"],
        "Rate": [4.092, 4.045000076, 4.007999897, 3.946000099, 3.878000021, 3.817500114, 3.732999802, 3.486999989,
                 # 3.252000809,
                 3.592400074, 3.37925005, 3.28760004, 3.259000301, 3.25999999, 3.274549961, 3.292999983, 3.322000027,
                 3.346000195, 3.374000072, 3.396999836, 3.430500031, 3.372499943, 3.245500088, 3.143749714, 2.965250015,
                 2.817500114],
    }
)
euribor_data["Termination"] = [rl.add_tenor(datetime(2023, 11, 2), _, "F", "bus") for _ in euribor_data["Term"]]

euribor_args = dict(curves="euribor6m", frequency="S", termination="6m", calendar="tgt")
euribor_args2 = dict(
    curves=["euribor6m", "eurrfr"],
    frequency="A",
    calendar="tgt",
    convention="Act360",
    leg2_frequency="S",
    leg2_convention="act360",
    leg2_fixing_method="ibor",
)

euribor6m_curve = rl.Curve(
    id="euribor6m",
    convention="Act360",
    calendar="bus",
    modifier="MF",
    interpolation="log_linear",
    nodes={
        **{datetime(2023, 10, 31): 1.0},  # <- this is today's DF,
        **{_: 1.0 for _ in euribor_data["Termination"]},
    },
)

instruments = [
    rl.FRA(datetime(2023, 11, 2), **euribor_args),  # 6M
    rl.FRA(datetime(2023, 12, 2), **euribor_args),  # 7M
    rl.FRA(datetime(2024, 1, 2), **euribor_args),  # 8M
    rl.FRA(datetime(2024, 2, 2), **euribor_args),  # 9M
    rl.FRA(datetime(2024, 3, 2), **euribor_args),  # 10M
    rl.FRA(datetime(2024, 4, 2), **euribor_args),  # 11M
    rl.FRA(datetime(2024, 5, 2), **euribor_args),  # 12M
    rl.FRA(datetime(2024, 8, 2), **euribor_args),  # 15M
    # rl.FRA(datetime(2023, 11, 2), **euribor_args),  # 18M
    rl.IRS(datetime(2023, 11, 2), "2y", **euribor_args2),
    rl.IRS(datetime(2023, 11, 2), "3y", **euribor_args2),
    rl.IRS(datetime(2023, 11, 2), "4y", **euribor_args2),
    rl.IRS(datetime(2023, 11, 2), "5y", **euribor_args2),
    rl.IRS(datetime(2023, 11, 2), "6y", **euribor_args2),
    rl.IRS(datetime(2023, 11, 2), "7y", **euribor_args2),
    rl.IRS(datetime(2023, 11, 2), "8y", **euribor_args2),
    rl.IRS(datetime(2023, 11, 2), "9y", **euribor_args2),
    rl.IRS(datetime(2023, 11, 2), "10y", **euribor_args2),
    rl.IRS(datetime(2023, 11, 2), "11y", **euribor_args2),
    rl.IRS(datetime(2023, 11, 2), "12y", **euribor_args2),
    rl.IRS(datetime(2023, 11, 2), "15y", **euribor_args2),
    rl.IRS(datetime(2023, 11, 2), "20y", **euribor_args2),
    rl.IRS(datetime(2023, 11, 2), "25y", **euribor_args2),
    rl.IRS(datetime(2023, 11, 2), "30y", **euribor_args2),
    rl.IRS(datetime(2023, 11, 2), "40y", **euribor_args2),
    rl.IRS(datetime(2023, 11, 2), "50y", **euribor_args2),
]

solver = rl.Solver(
    pre_solvers=[solver_estr],
    curves=[euribor6m_curve],
    instruments=instruments,
    s=euribor_data["Rate"],
    instrument_labels=euribor_data["Term"],
    id="euribor6m"
)

euribor_data["DF"] = [float(euribor6m_curve[_]) for _ in euribor_data["Termination"]]

with pd.option_context("display.float_format", lambda x: "%.6f" % x):
    print(euribor_data)


BUG: inferred stubs generated from unadjusted days can have zero length

In some cases, stub inference can produce dead stubs.

E.g. Sunday 2nd May 2027 through Wednesday 3rd May 2047: the unadjusted roll day of 2 or 3 is invalid and a short front stub to Monday 3rd May 2027 is generated.

This creates a dead stub of zero length because Sunday 2nd May 2027 is modified forwards to the same inferred stub date.

ENH: cash instruments

Cashflow exists, but an FX spot transaction would be a useful instrument.
Also repos, with PnL ability.

Can cashflows in various currencies be used to manage settled cash in a portfolio?

Do we need a loan Instrument, or can we reuse a fixed leg exchange?

BUG? _dcf_actacticma with leap year

The Bill.dcf attribute returns an implausibly large dcf when using convention ActActICMA.
I think the problem is the effective date of 29th Feb 2024.

Could you please check whether _dcf_actacticma works correctly for the leap year?

Code to replicate is below:

from rateslib import Bill
from datetime import datetime as dt

bill_actacticma = Bill(
    effective=dt(2024, 2, 29),
    termination=dt(2024, 5, 29),   # 90 calendar days
    modifier='NONE',
    calendar='bus',
    payment_lag=0,
    notional=-1000000,
    currency='usd',
    convention='ACTACTICMA',
    settle=0,
    calc_mode='ustb',
)

bill_act360 = Bill(
    effective=dt(2024, 2, 29),
    termination=dt(2024, 5, 29),   # 90 calendar days
    modifier='NONE',
    calendar='bus',
    payment_lag=0,
    notional=-1000000,
    currency='usd',
    convention='ACT360',
    settle=0,
    calc_mode='ustb',
)

print(bill_actacticma.dcf)  # 8333333.333333333
print(bill_act360.dcf)      # 0.25
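
A quick sanity check of the expected day count (the ACT360 figure above is consistent with 90 days, so the ACTACTICMA value is clearly off):

from datetime import datetime

days = (datetime(2024, 5, 29) - datetime(2024, 2, 29)).days
print(days)        # 90
print(days / 360)  # 0.25, matching bill_act360.dcf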

Thank you.
