
osu-performance's Introduction


This is the program computing "performance points" (pp), which are used as the official player ranking metric in osu!.

Current Versions

This is part of a group of projects which are used in live deployments where the deployed version is critical to producing correct results. The master branch tracks ongoing developments. If looking to use the correct version for matching live values, please consult this wiki page for the latest information.

Compiling

All that is required for building osu!performance is a C++11-compatible compiler. Begin by cloning this repository and all its submodules using the following command:

$ git clone --recursive https://github.com/ppy/osu-performance

If you accidentally omitted the --recursive flag when cloning this repository you can initialize the submodules like so:

$ git submodule update --init --recursive

osu!performance runs on Windows, macOS, and Linux. The build environment is set up using CMake as follows.

Windows

Open the command line and navigate to the root folder of this repository.

osu-performance> mkdir build
osu-performance> cd build
osu-performance\build> cmake ..

Now the build folder should contain a Visual Studio project for building the program. Visual Studio 2017 and a 64-bit build are recommended (cmake -G "Visual Studio 15 2017 Win64" ..).

macOS / Linux

On macOS / Linux you need to install the MariaDB MySQL connector and cURL packages. Afterwards, in a terminal of your choice, do

osu-performance$ mkdir build
osu-performance$ cd build
osu-performance/build$ cmake ..
osu-performance/build$ make -j

Sample Data

Database dumps with sample data can be found at https://data.ppy.sh. This data includes the top 10,000 users and a random sample of 10,000 users from the full user base, along with all auxiliary tables required to test this system. Please note that this data is released for development purposes only (full licence details available here).

You can import these dumps into MySQL (after first extracting them) by running cat *.sql | mysql. Note that all existing data in these tables will be dropped and replaced. Make sure to import the latest available data dumps, as older snapshots may be incompatible with the latest version of osu!performance.

Usage

First, set up a MySQL server and import whichever of the data dumps provided above is most relevant to your use case. Next, edit bin/config.json with your favourite text editor and configure mysql.master to point to your MySQL server.

After compilation, an executable named osu-performance is placed in the bin folder. You can use it via the command line as follows:

./osu-performance COMMAND {OPTIONS}

where COMMAND controls which scores are the target of the computation. The following commands are valid:

  • all: Compute pp of all users
  • new: Continually poll for new scores and compute pp of these
  • scores: Compute pp of specific scores
  • users: Compute pp of specific users
  • sql: Compute pp of users given by a SQL select statement

The gamemode to compute pp for can be selected via the -m option, which may take the value osu, taiko, catch, or mania.

Information about further options can be queried via

./osu-performance -h

and further options specific to the chosen command can be queried via

./osu-performance COMMAND -h

Configuration options beyond these parameters, such as various API hooks, can be adjusted in bin/config.json.

Docker

osu!performance can also be run in Docker.

Configuration is provided via environment variables or by mounting the config file at /srv/config.json.
Available environment variables:

MYSQL_HOST
MYSQL_PORT
MYSQL_USER
MYSQL_PASSWORD
MYSQL_DATABASE

MYSQL_SLAVE_HOST
MYSQL_SLAVE_PORT
MYSQL_SLAVE_USER
MYSQL_SLAVE_PASSWORD
MYSQL_SLAVE_DATABASE

MYSQL_USER_PP_TABLE_NAME
MYSQL_USER_METADATA_TABLE_NAME

WRITE_ALL_PP
WRITE_USER_TOTALS

POLL_INTERVAL_DIFFICULTIES
POLL_INTERVAL_SCORES

SENTRY_HOST
SENTRY_PROJECTID
SENTRY_PUBLICKEY
SENTRY_PRIVATEKEY

DATADOG_HOST
DATADOG_PORT

Example:

docker build -t osu-performance .
docker run --rm -it \
  -e MYSQL_HOST=172.17.0.1 \
  -e MYSQL_USER=osu \
  -e MYSQL_PASSWORD=changeme \
  -e MYSQL_DATABASE=osu \
  osu-performance all -m osu

A docker-compose.yml file is also provided, with built-in MySQL and phpMyAdmin servers for convenience.
It supports importing *.sql files placed in a dump folder, such as those found at https://data.ppy.sh, on first start.

Licence

osu!performance is licensed under AGPL version 3 or later. Please see the licence file for more information. tl;dr if you want to use any code, design or artwork from this project, attribute it and make your project open source under the same licence.

Note that the sample data is covered by a separate licence.

osu-performance's People

Contributors

mbmasher, millhioref, nekodex, numbermaniac, peppy, smoogipoo, stanriders, thepoon, tom94, vendethiel, xexxar


osu-performance's Issues

Scaling miss penalization according to the number of hitobjects in a map

Draft

At the moment, there are two places in the code that penalize players for misses. These penalties are not scaled with map length, so longer maps end up with much lower pp plays on average.

	// Penalize misses exponentially. This mainly fixes tag4 maps and the likes until a per-hitobject solution is available
	_aimValue *= pow(0.97f, _numMiss);

	// Penalize misses exponentially. This mainly fixes tag4 maps and the likes until a per-hitobject solution is available
	_speedValue *= pow(0.97f, _numMiss);

This could be fixed simply by scaling the miss count by the ratio numTotalHits/576, as shown here. That way the penalization scales with the hitobject count.

	// Penalize misses exponentially. This mainly fixes tag4 maps and the likes until a per-hitobject solution is available
	_aimValue *= pow(0.97f, _numMiss * (numTotalHits / 576.0f));

	// Penalize misses exponentially. This mainly fixes tag4 maps and the likes until a per-hitobject solution is available
	_speedValue *= pow(0.97f, _numMiss * (numTotalHits / 576.0f));

Why 576

The average map in osu! is 90 - 120 seconds long. Such maps mostly have 550 - 600 hitobjects, so the mean is around 575 and 576 is pretty close to it. In my opinion, if the actual average hitobject count were computed, 576 would end up closer to it than 575.

576 is 1001000000 in binary, which makes it cheap for computers to work with (it has only one '1' digit besides the leading one), so the computation won't take much longer.

Edit

I think that numTotalHits is the count of hitobjects; correct me if I am wrong. I also forgot that neither numTotalHits nor 576 is a float, so the division has to be written as numTotalHits / 576.0f (as in the snippets above), but you get the point.

Taiko Star Rating/Performance System Suggestions

Sorry if my wording is a bit awkward, I don't really write long formal explanations like this often.

Alright, so first of all, the issues with taiko star rating/pp system right now -

  • The way strain is calculated results in many patterns being overweighted.
    This is a very simple point, but it accounts for the majority of the issues and is caused by a multitude of factors. The specific patterns I'm talking about are mostly doubles and patterns that incorporate 1/4 mixed with 1/6, but there are many others as well.

So, I'll do a quick breakdown of what causes most of these issues. Taiko star rating is calculated through strain, similarly to standard. Each object decays in strain based on the distance from the previous object (this results in higher star rating on faster maps), adds a fixed amount of strain, and then other bonuses, of which there are two - color and rhythm, each of which has some problems.

So, first, rhythm. The rhythm bonus occurs any time the gap between two objects is different from the previous gap within a certain margin. If it changes by an amount that is not a multiple of two (1/2, 2x, 1/4, 4x, etc) a fixed bonus is added to strain. This results in doubles being overweighted, since it applies this bonus on every single note if they're spaced properly, and on 1/6+1/4 every time it switches between the two.

Second, color. The color bonus occurs any time a map goes from an even number of one color to an odd number of the other color, or vice-versa. Similarly to the rhythm bonus, the color bonus is a fixed value, which results in patterns that swap color in this way being worth much more than patterns that do not. This is what results in overweight on patterns such as kddkddk, and it also contributes to 1/6+1/4 patterns, which often switch between even and odd numbers of objects because of the way they are usually constructed.

The way strain was calculated was fine at first, since most older taiko maps were mapped in a style similar to the official taiko games (I think? Don't quote me on this.), in which the kinds of patterns that cause excessive boosting were relatively uncommon. It would probably also be fine for the actual taiko game, where such patterns would likely be much more difficult, but with the way osu!taiko has developed, it's just not accurate enough anymore.

So, proposals as to how to adjust these -
For rhythm, rather than a fixed bonus regardless of the change, different degrees of change would give different fixed bonuses. Instead of speedup and slowdown being treated equally, slowdown would generally be worth less.

Color is a bit more complicated; instead of only receiving a bonus when the number of objects changes from odd to even or even to odd, a bonus would occur on each color switch, based on the number of consecutive objects of the same color, and be reduced based on a few factors: even run lengths are worth less, and if the same number of objects is repeated multiple times (tracked separately for each color), the value is decreased for each repetition.
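To make the shape of this proposal concrete, here is a minimal sketch of what such a color bonus could look like; the function name, constants and decay factors are placeholders of mine, not values from the proposal:

// Hypothetical sketch of the proposed color bonus: granted on every color
// switch, based on the length of the run of same-colored notes that just
// ended, reduced for even run lengths and for run lengths that keep repeating
// (tracked separately per color). All constants are placeholders.
#include <cmath>
#include <map>
#include <utility>

double ProposedColorBonus(int runLength, bool isKat, std::map<std::pair<bool, int>, int>& repeats)
{
	double bonus = 0.75 * std::log(1.0 + runLength); // grows sub-linearly with run length

	if (runLength % 2 == 0)
		bonus *= 0.7; // even run lengths are worth less

	int timesSeen = repeats[{isKat, runLength}]++; // how often this run length has occurred for this color
	bonus *= std::pow(0.8, timesSeen);             // repeated run lengths keep losing value

	return bonus;
}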

  • The star rating of a map can change simply by adjusting the position of all objects relative to the start of the map.
    This is an issue caused by how the final star rating is composited from the strain values determined for the objects. Fixed sections are determined, starting from position 0 in the map and moving a set distance for each section (adjusted when DT is applied). The highest strain value from each section is taken and used to determine star rating. As the intention of this is to reduce the effects of short points of high strain, it has the issue that two objects with high strain right next to each other can both be counted if the section happens to change right between them.

For this, instead of going from start to end, it goes in order from the highest strain object to the lowest strain object in the entire map. For each object, it disqualifies all objects within a fixed distance of that object, so after taking the highest strain object at a point, all nearby objects with similar but slightly lower strain would not be counted when it reaches them.
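A minimal sketch of that peak selection, assuming per-object strains and times are available; the names and the window size are mine, not from the proposal:

// Hypothetical sketch of the proposed peak selection: take objects from highest
// to lowest strain and disqualify everything within a fixed time window around
// each accepted peak, instead of using fixed sections.
#include <algorithm>
#include <cmath>
#include <vector>

struct ObjectStrain { double time; double strain; };

std::vector<double> SelectStrainPeaks(std::vector<ObjectStrain> objects, double windowMs)
{
	std::sort(objects.begin(), objects.end(),
	          [](const ObjectStrain& a, const ObjectStrain& b) { return a.strain > b.strain; });

	std::vector<double> peakTimes;
	std::vector<double> peaks;
	for (const auto& obj : objects)
	{
		bool nearAcceptedPeak = std::any_of(peakTimes.begin(), peakTimes.end(),
			[&](double t) { return std::abs(t - obj.time) < windowMs; });

		if (!nearAcceptedPeak)
		{
			peakTimes.push_back(obj.time);
			peaks.push_back(obj.strain);
		}
	}
	return peaks; // sorted from highest to lowest strain
}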

  • Slider and spinner spam results in excessive star rating - this is not technically an important problem, as any map that would have this issue would be unrankable, but if it wanted to be fixed, all that would need to be done is to not add any strain on sliders/spinners, instead of still adding the fixed amount of strain that all objects do. This is more of an unimportant change, though, as some mappers might appreciate making maps with silly star rating using this bug/gimmick/feature (ex. https://osu.ppy.sh/s/758043)

That's pretty much it for star rating, so next is some comments on how pp value of a map is calculated.

I haven't gone nearly as in-depth into how this is calculated, but I do have some things that I would suggest adjusting.

  • Maps are worth more based on the number of objects
    Concept-wise, this is fine, since logically a longer map would require more consistency and therefore be worth more. The problem comes when a map is mostly easy with very short points of difficulty; this results in an unnecessary bonus. So, instead of directly using the number of objects in a map, it should use a weighted object count based on the strain of the objects (see the sketch after this list). The closer an object's strain is to the highest strain in the map, the closer its weight is to 1; the further it falls from the hardest strain in the map, the closer its weight gets to 0. This lets you see whether a map has relatively consistent difficulty, or just a few points with high strain and the rest of the map merely inflating the object count.

  • Misses devalue long, consistently challenging maps a bit too much
    Mostly, the way misses work is fine. I just think that maps with a higher weighted object count (from the previous point) should have the effect of misses reduced compared to maps with a low weighted object count. Basically, longer, consistently challenging maps, where it's easier to rack up a higher miss count, have the effect of misses slightly reduced compared to shorter maps.
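A minimal sketch of the strain-weighted object count described in the first bullet above; the curve shape and exponent are placeholders of mine, not proposed values:

// Hypothetical sketch of a strain-weighted object count: each object's weight
// approaches 1 as its strain approaches the map's peak strain and falls towards
// 0 as it moves away from it. The exponent is a placeholder.
#include <algorithm>
#include <cmath>
#include <vector>

double WeightedObjectCount(const std::vector<double>& objectStrains)
{
	if (objectStrains.empty())
		return 0.0;

	double maxStrain = *std::max_element(objectStrains.begin(), objectStrains.end());

	double weightedCount = 0.0;
	for (double strain : objectStrains)
		weightedCount += std::pow(strain / maxStrain, 2.0); // placeholder exponent

	return weightedCount;
}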

  • OD is worth way too much
Right now, the difference in value between plays is way too severely affected by the OD of the map.
As an example - 7* 2500 object 99% acc at OD 7 = 385 pp
7* 2500 object 99% acc at OD 10 (or 9.8, which is what you get with OD7 + HR) = 456 pp
While the higher OD certainly does make it much more difficult to get such good accuracy, the difference right now is just a bit TOO high. As a much more extreme example -
3* 2500 object SS at OD 4 is 167 pp, which is probably still reasonable. But, the same map at OD 10 (and yes I know a 3* with OD 10 would never get ranked) is worth 300 pp, which is way too much for a 3 star map. Just, overall, it needs a bit of toning down.

Alright, after some thought, OD changes are probably unnecessary to make right away. With the way OD affects pp is calculated the other changes would already have a pretty large effect, so it can be seen if OD is still a problem later.

Explanation of overall results of these changes -
1/6+1/4 maps have their star rating decreased. It will still be relatively high, but this will be balanced out by the weighted object count which means that maps that only have a few short bursts of 1/6 inflating the star rating will be worth much less, while maps that use a large amount and are therefore more challenging should still be worth a reasonable amount.

Some extreme examples of technical maps will still be underweighted, and some probably more than before. (ex. https://osu.ppy.sh/s/742538), but it's hard to avoid this.

Speed maps are generally higher in star rating due to multiple factors (with bonus from rhythm and color reduced to some degree, more of the total value of maps comes from the base of just speed)

Due to reduction of bonuses from rhythm, certain maps where easier diffs have a higher star rating than the difficulty above them are mostly fixed. (ex. https://osu.ppy.sh/s/138886 where futsuu > muzu)

Probably some other effects, but I can't think of anything else important off the top of my head.

I'd appreciate it if you would comment thoughts, or any ideas of other improvements.

If you want to see an example of implementation -
Github of visual studio project - https://github.com/Alchyr/taiko
This is coded in Visual Basic, mostly for convenience of the data display, but the method of implementation should be compatible with how star rating is currently coded, it would just need to be adjusted to the correct language (and cleaned up a bit). While the star rating can be treated as relatively balanced, the pp value is mostly just experimental.

Direct download of tool - https://drive.google.com/file/d/1TefHv5g1UCYuFt0U09wHzMlS7e1hPJDz/view?usp=sharing
This is just the compiled .exe from that project. It calculates both the old and new star rating as well as some other values for each taiko map in your osu!/songs folder. If you don't trust this, just use the other link and compile it yourself after checking the code if you want.

Well, thank you for taking the time to read this. Hopefully my explanations made sense to you and weren't too poorly worded or confusing.

Addition of pp to slider based on Slider Velocity

I'm proposing an addition to slider pp on top of the base pp, using a weighted average summation of SV*bpm.

final_pp = base_pp + weightedavg_slider_pp
weightedavg_slider_pp = ((SV1*bpm1 + SV2*bpm2 + ... + SVn*bpmn)/n*SV) * -log0.002(number of sliders) ; where n represents the number of slider timing points present in the map
if number of sliders < 1, set weightedavg_slider_pp = 0

Acc rebalance for speed pp

(In collaboration with Dumii and VINXIS)

Acc scaling for speed pp right now is pretty terrible - it's linear. Also, the effect OD has on the scaling is pretty low. Things like this are what cause people to gain pp from doubletap plays on streamy EZDT maps (and this will soon be further amplified when Xexxar's speed buff goes through).

This is what Dumii, VINXIS and I have come up with. We have changed the old linear curve to a sigmoid curve. This means that everything under a certain acc threshold gets nerfed pretty harshly, while anything above gets a very slight buff. OD also has a large effect: if the OD is lower, that acc threshold becomes higher, and thus low acc plays are nerfed more. Link to calculator in Desmos
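For illustration only, here is a minimal sketch of the shape described above (a sigmoid in accuracy whose threshold rises as OD drops); every constant is a placeholder and not a value from the Desmos calculator:

// Illustrative sketch only: a sigmoid accuracy multiplier for speed pp whose
// threshold moves up as OD goes down, so low-acc plays on low-OD maps are
// nerfed hardest. All constants are placeholders.
#include <cmath>

double SigmoidSpeedAccMultiplier(double accuracy, double od)
{
	double threshold = 0.96 - 0.004 * od; // placeholder: lower OD -> higher threshold
	double steepness = 40.0;              // placeholder

	// Approaches ~1.05 above the threshold and falls off sharply below it.
	return 1.05 / (1.0 + std::exp(-steepness * (accuracy - threshold)));
}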

Here are examples of people's profiles with this change:

Gayzmcgee - most of his plays get a slight buff as none of them are really low acc enough to get nerfed.
exc - the maps in his top that get nerfed are EZHDDT plays which range from 80-90 acc.
404 AimNotFound - The selected play in the screenshot is an EZDT play on a speed map with 87% acc.

Would be cool to hear some thoughts from the general community before we push this change further. Thank you for reading!

Touchscreen acc pp buff proposal

Holding decent accuracy is a much harder skill on touchscreen than on mouse/tablet, and many plays are underweighted to the point where even 500pp touch plays are rare. This is why I am proposing a change to the calculation of touch acc pp.

Old: 1.52163^od * acc^24 * 2.83
New: 1.58^od * acc^7 * 3

Here is a link to a desmos graph which visually shows these changes. As you can tell from the graph or the formula, the acc curve is much less harsh and the base multiplier has also slightly increased.
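For reference, here are the two formulas above transcribed directly into code (acc in the range 0 to 1), so the old and new values can be compared at a given OD; the example inputs are mine:

// Direct transcription of the old and new touchscreen acc pp formulas stated
// above, with acc given in the range [0, 1].
#include <cmath>
#include <cstdio>

double OldTouchAccPP(double od, double acc) { return std::pow(1.52163, od) * std::pow(acc, 24.0) * 2.83; }
double NewTouchAccPP(double od, double acc) { return std::pow(1.58, od) * std::pow(acc, 7.0) * 3.0; }

int main()
{
	double od = 9.0, acc = 0.97; // example inputs, not from the proposal
	std::printf("old: %.2f  new: %.2f\n", OldTouchAccPP(od, acc), NewTouchAccPP(od, acc));
	return 0;
}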

Example Values: (comparing live values to values with recent changes and this proposal)
freedomdiver: 8291.0pp -> 8782.1pp
EbonSol: 8292.5pp -> 8759.3pp
yeahbennou: 6746.8pp -> 7191.4pp
itsamemarioo: 7867.4pp -> 8277.5pp

Thank you for reading, and it would be nice if I could get some feedback.

Proposal: Nerf pp if a large amount of players has the same play with mods

Explanation: we all know which maps are called "pp maps", and how overweighted they are. My proposal is the following: if more than 50% of the players on a map with a given mod combination have FC'd it, nerf the pp exponentially.
This means, for example, that if 1000 players have played a beatmap "X" with HDHR, and 40%~50% or more of them have an FC, the pp should be nerfed by a small percentage. This could also be applied to SSes in particular, nerfing the pp even further (in the same way, if SSes make up more than 50% of the plays on the map with that mod combination).

Maybe I could explain it in more detail, but in short, this is my proposal. I know it's easier said than done, but I think it's actually the best way to balance the overweighted beatmaps.

How to Revive Mapping Completely and Fix the PP System Entirely

The point of this is to explain my ideas, not to execute them in code. I don't know how to code, but I know that you can do a ton with it.

Having that in mind, all these are extremely possible and all we need is time and effort.
If nobody steps up I will seriously learn how to code these kinds of things myself.
And no, I'm not that confident :l

Index:

1. Introduction 🔍
2. The AI and Maps 👾 🎼
3. Farming and the Impact Loop 🌾 🌌
4. The League System 👑
5. (Optional) Complete Destruction 💣 💥

Numbers that separate portions of discussion and ideas are separated with lines to make navigation easier.

1. 🔍 -----------------------------------------------------------

I will run through finished ideas I've been thinking about intensely regarding the pp farming problem, how it killed mapping, and how to fix both.
Oh, this also causes a lot of players to quit.
(As you know, people heavily overplay maps that give more pp: typically a short song with distanced jumps. The grinding starts when you turn on DT and HD, letting them play something that was mapped easily because of difficulty requirements, really fast and really easily. From my knowledge this means pp gains are a lot harder to exploit in other game modes, since there is no such thing as a distanced jump other than in catch.)

👂 I highly encourage feedback and discussion, I am open to criticism and will not hide from it. 💬

I'm telling you this because this is a problem we all want fixed. "The PP Paradox" is how I name the issue, since it rewards points ("Performance Points") based on performance for performing less. Avoiding discussion and criticism is always a terrible idea and will make reaching the goal of this much harder.
Remember that no ranking system can be perfect; there will be flaws, but let's make this as good as possible.

2.👾 🎼 -----------------------------------------------------------

Repetition of the same kinds of songs, maps and ideas is not performing better, it's performing the same, and so the reward breaks down instantly: performing less ≠ performing more; it's contradictory to a performance increase, a paradox.

Change how the AI gives gains, as an add-on to what's there already.
If there's too much repetition in style, then lower the gain; this will encourage mappers to change the style of the map over time and not repeat the same patterns over and over.
The current super pp gain style is a bunch of largely distanced jumps in a back and forth pattern. The AI should calculate and deduct PP gains if someone is exploiting these mapping styles.

There are countless ways to map the same thing, best seen in guest remaps of ranked songs

Song speed should be considered for repetition
If the song is extremely fast paced, it can be hard to cram many different methods into one map. This is balanced out because fast paced maps gain more pp (they are harder), while repetition makes the map lose pp; the loss is still enough to encourage mappers to change it up, because more pp = more players. It balances out and should increase diversity, but we're not done so quickly.

The loss shouldn't be too much as long as the repetition is reasonable.

Use a perfect server side player dummy, one that doesn't snap unnecessarily (auto does that) and reads curves and sharp angles between jumps and flow.
Think of the cursor as drawing with a marker across a whiteboard as it moves; pp is then determined by how fast these movements are and whether they make sense. A better AI would calculate flow, then check whether the map follows a constant pattern and isn't predictable or repetitive. (Sliders, jumps, sliders, repeat, is a common farm repetition.)

It would be best to consult some less stereotypical mappers about what makes a map different and fun vs a farm map, then incorporate it in a way to allow fun maps to provide more pp and decrease pp on maps that give too much because of jumpy repetition.
I am no pro mapper, but I can easily spot a farm map when I see one.

Each part of the music should be mapped differently every time the tune shows up again, except for the chorus. Each segment (Drums, Guitar :Chorus: Guitar, Drums, Guitar, Drums, Guitar :Chorus:, etc.) can be mapped and played differently.

How this would be coded I am still quite unsure about, but what I am sure of is that it's possible; that's why I am just proposing ideas.

3.🌾 🌌 -----------------------------------------------------------

If you're unfamiliar with farming, I'm surprised.
It's just playing tons of jumpy low star maps with dt hd on, best seen in low to med triple digit players.
(and in intense farmers, double digits!)
And it's about the best they can do, otherwise they "tank in rank" trying to improve their actual performance instead of grinding.
Sure, it's their choice to grind, but that doesn't mean they want to; it creates a conflict: do I keep my rank but grind, or lose my rank and try to improve? And just by jumping through profiles you can see people getting bored and leaving because of the paradox.
The worst part is how successful grinding is, so even if they make the change, most of the time they're soon back to grinding. And the loss incurred while trying to improve is colossal.
I came up with this:

4. 👑 -----------------------------------------------------------

Make groups of leagues in the ranked board, like other games. It's not a bad idea, and that's why it's used so often!
However, there's a catch:
every time you move up a league (i.e. Copper, Silver, Gold), decrease the amount of pp gained from certain star ranges. Example: Dust: AFK league (explained soon), Copper: 1-2, Silver: 2-3, Gold: 4-5, etc. (2 shown twice since the transition time from 2 to 3 is much shorter)

If someone goes inactive for too long they should drop to dust. In dust, plays are worth no pp and there is no ranking.
The only way to leave dust is to make a play that would equal the same star ranking as what you were in when you went inactive, and this could only happen if the inactive player is no longer rusty, so it works perfectly. (I.E. In gold, dropped to dust, does a 4 or 5 star, back in gold)
(The play that got them out of dust should grant pp)
I will simplify since I'm not sure what to call the leagues, but leagues should climb in decimals, dividing each league into semi-leagues.
Example: Copper I: 1 - 1.5, Copper II: 1.5 - 2
It's best to divide semi-leagues in two because it will help players recognize, with a glance at a profile, whether that person is in the transition period of going to the next league or has just entered it. Good players by now shouldn't have any problem at all doing 6 and 7 star maps; the only problem comes when replaying a song. Replaying songs early on when you're still learning is terrible, but at their level play style won't make sudden changes, and when you replay something and finally do it, there is pure joy to be found in it. So replaying songs is part of being in a high league and promotes getting better.
And that's the whole point of pp

5. _ 💣 ❓ -----------------------------------------------------------_

If it's too broke, hell, don't fix it. I'm familiar with the poll you did on Twitter, and that's why this is our last resort.
Removing a pp system to gauge ranking removes pp farming and keeps ranking.
Changing what ranks a player is a good idea.
IoExceptionOsu suggests a system based on just tournaments.
✔️ Pros: You can focus on different aspects of skill, aiming, streams, etc.
❌ Cons: Slower process, only ranks on how well a player did on a specific map or maps.

But this isn't something I'm discussing because it's only the plan B.
That doesn't mean we shouldn't consider it, but we need to remember it's a fallback, and just that.
I will look into what we can rank players on if there's no possible way to fix the current system.
The goal of this is finding what we can do to fix the system, not what can replace it.

Are you about to comment?

If it's something wrong with my fix(es) take your time and think, see if you can find a solution to it at the same time.

--

Todo:

(Todo list will increase with unsolved things, look here if you want to contribute, other than an error)

  • Find a good league theme that relates to osu!

--

To contact me directly: [email protected] or pm me

Version Number: 1.0

The version will increase with comments that fix an error in what's posted; big or small, I'll edit it in.
Credit will be given where the fix or idea is edited in.

Working for a Speed Difficulty meter

Remember that this topic's proposal is at the moment very incomplete and is continuously growing. With that mentioned, let's check the proposal.

Hi, this topic is about finding a solution for the speed variable by rationalizing it. The intention of this topic is to give speed difficulty enough independence from aim; what we are searching for is a way to evaluate speed without it being heavily influenced by aim.

Here we're assuming speed is mostly your general tapping speed, meaning that this variable grows the faster you tap, without too much influence from aim.

This is a very basic speed formula which I'm going to explain
[image: basic speed formula]

1. First we compute 60/bpm and then divide by X, which is the kind of tapping (1/2, 1/3, 1/4, etc.), and we get the time separation between circles. There's nothing new in this part. For this we use seconds as our unit of measurement, not ms.
2. To get the SSRD (Speed Star Ratio Difficulty) of a part, we apply the formula to the value obtained in step 1.

For this we raise the result from step 1 [the time interval (in seconds)] to the power of -1/3.
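A minimal sketch of steps 1 and 2 in code, with function and variable names of my own choosing:

// Sketch of the SSRD calculation described above: the time between taps in
// seconds, raised to the power of -1/3. Names are mine, not from the proposal.
#include <cmath>

double ComputeSSRD(double bpm, double tapDivisor /* e.g. 2 for 1/2, 4 for 1/4 */)
{
	double interval = 60.0 / bpm / tapDivisor; // step 1: seconds between circles
	return std::pow(interval, -1.0 / 3.0);     // step 2: SSRD
}

// ComputeSSRD(180.0, 4.0) ~= 2.28942 and ComputeSSRD(180.0, 2.0) ~= 1.81712,
// matching the examples below.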

Let's say a stream practice map of 180 bpm at 1/4's will be 2.28942 SSRD, while a "stream compilation" with only 1/2's at 180 bpm will be 1.81712 SSRD.

But hey! We're talking about a map made ONLY of consistent single-tapped 180 bpm patterns (circles only), which is why the speed star rating is that HIGH. Note that maps commonly have sliders, little breaks or slow parts that will decrease their speed star difficulty.

Obviously, the higher the average SSDR a map is exposed to, the higher its Speed Star Rating will be, and obviously length will have an impact on this. But if the stream parts of a map are consistent, those will matter the most for SSDR (note that if a map is mostly single tapping with only a few streams, it won't rise that much in SSDR, while a slower map that is more consistent in streams will have a higher SSDR).

SSR (Speed Star Rating) grows faster the higher the SSDR it's exposed to, while lower SSDR degrades the global SSR a bit via a kind of averaging. Obviously peaks will have a certain permanent impact on SSR (but the longer the map goes on after a stream disappears, the more SSR it will slowly lose if the following patterns are only single taps).

3. At the moment we have no idea how to compute the Z variable, but basically Z is a value between 1 and 1.50 (the limit is arguable, but higher values would become much more difficult to reach) which would help spaced streams (a variable that depends on the aim bonus) and aim flow overall. The Z bonus might be much lower for 1/2 jumps even if their spacing is extremely large, while it increases the shorter the time interval is, even if the circles aren't that distant.

Here is a table in which I have posted some values of time in seconds and the speed star difficulty ratio for some examples (I didn't work out more cases because each value was obtained manually, but much more is coming!)

https://docs.google.com/spreadsheets/d/1AbLQBPCuU3WF2OhQVIm88Tm_rE4sq1ew9LrR_sd4GhU/edit#gid=0

Important note: this may slightly underweight some extremely high bpm maps, so adjustments to the original formula will be needed!

ENORMOUS ADVANTAGES

  1. It will rationalize the way a map's speed is judged once this proposal becomes clever, complete and applicable.
  2. It will apply a fair nerf to the speed difficulty rating of very jumpy maps that for some reason are overweighted in speed. This will mostly affect maps with large jumps on only 1/2 rhythms at not-so-high bpm, like Future Sons [OK DAD] and Choco Cookie +DT, and many other overweighted jumpy maps that only have single tapping or very simplistic rhythms.

PENDING

  1. Finding a way to better judge sliders (reversals, long ones, etc.)
  2. Finding how AR and CS should factor into the speed difficulty rating.
  3. Improving the fluidity of the English in this proposal.
  4. Finding how length should influence the speed difficulty rating depending on various factors.
  5. Modifying the pioneer formula for better results.
  6. More content to add and debate.

Known comparisons in https://osu.ppy.sh/b/207692 (the best mapset for this kind of evaluation)

[Ai no Niwa BPM120]

  • Actual global SSR: 1.86
  • Proposal's SSDR: 2

[SHK - Identity Part 4 BPM 140]

  • Actual global SSR: 2.16
  • Proposal's SSDR: 2.1054

[Cuvelia - Tenkuu no Yoake BPM150]

  • Actual global SSR: 2.31
  • Proposal's SSDR: 2.15443

[sakuzyo - AXION BPM160]

  • Actual global SSR: 2.46
  • Proposal's SSDR: 2.20128

[Eoin O' Broin - Oblivion BPM170]

  • Actual global SSR: 2.61
  • Proposal's SSDR: 2.24622

[sakuzyo - Neurotoxin BPM180]

  • Actual global SSR: 2.61
  • Proposal's SSDR: 2.28942

NOTES:

  1. DO NOT confuse global SSR (Speed Star Rating) with SSDR (Speed Star Difficulty Rating) itself; the global one is the final result and is the real Speed Star Rating, while SSDR is a value that judges an interval of a map.
  2. The proposal's SSR can become higher than the SSDR value the longer you have to maintain that SSDR (a kind of stamina bonus)

Continuous Difficulty Calculation

(It's technically not continuous because of the Heaviside step functions involved when adding spacing weights, but whatever)
About 2 days ago someone on reddit mentioned to me that Xexxar was trying to do per hit object difficulty calculation. I was surprised when this hadn't been done before, so I decided to take a crack at it.

I'm going to try to keep this as simple as I can

Using the decay and strains, you can easily create and solve a diffEQ to model strain.

ds/dt = ln(0.15)/1000*s+ 26.25*sum( (D_i^0.99/∆t_i) * Dirac Delta(t-t_i) )
where ∆t_i is the delta time associated with a hit object
t_i is the time associated with the hit object
D_i is the distance associated with the hit object
s is strain
(you can actually use this to create a slider rework but that is a discussion for another time)
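As a quick sanity check on the equation above, solving it piecewise (same notation as above) gives the familiar exponential decay between hit objects plus an instantaneous jump at each one:

s(t) = s(t_i^+) * e^(ln(0.15)/1000 * (t - t_i)) = s(t_i^+) * 0.15^((t - t_i)/1000)   for t_i < t < t_(i+1)
s(t_i^+) = s(t_i^-) + 26.25 * D_i^0.99 / ∆t_i   (the jump contributed by the Dirac delta at t_i)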

From there, I created sets of strains from the hit objects until right before the next one. Note that these sets are continuous along the real number line. I took all the infimums and supremums of those sets and used those to calculate the frequency that a certain strain occurs (this is probably the thing I'm most proud of in this entire thing). The frequency shows how many times the strain function hits a certain strain. It is the amount of elements in the intersection between the collection of strain sets and a certain strain. Then I inverted the homogenous portion of the strain diffEQ to find the differential time at each strain.
ds/dt=ln(0.15)/1000*s => dt=-1000/ln(0.15) * ds/s
you have to add a negative because why would you want negative time?
The strain function spends dt time at strain s.
To get the probability density function, you multiply the frequency of a strain by the differential time, then divide the whole thing by the total time of the beatmap in milliseconds. You integrate that to get the cumulative density function.
Something like:
CDF(s) = -1000/(T*ln(0.15)) * sum( frequency(s) * H(s-x_i) * ln(min(s, x_(i+1)) / x_i) )
which is the probability that the strain in the beatmap is less than or equal to s
where T = total time of beatmap
s = strain
x_i & x_(i+1) are discrete strains from hit objects
H(x) is the Heaviside step function
After this, you space out the probabilities evenly and add to find your weighted strain or integrate
weighted avg strain= integral(s * 0.9^((1-CDF(s))*N)ds)/integral(0.9^((1-CDF(s))*N)ds)
which is fairly easy because a lot cancels (and a lot is constant) and the exponential of a logarithm is a power.
I already implemented this in R.
Edit: Note that all of the 0.15 can be replaced with 0.3 for speed or any other decay constant
Edit: fixed the particular part of the first EQ
Edit 3: fixed weighted strain integral

Rebalance pp awarded for flashlight plays

Right now, the way FL affects pp is that it multiplies aim pp by 1.45 * lengthBonus, and also multiplies acc pp by 1.02. The problem here is that the FL multiplier uses the length bonus used for normal pp as its base. The length bonus is designed in a way that doesn't benefit extremely long maps too much: it gives less and less of a bonus the more objects there are, eventually flattening into a logarithmic curve. This is the exact opposite of what we want in an FL system. The proposed system below doesn't use any logarithmic curve; it just stays linear. This gives short maps significantly less of a bonus and long maps significantly more.

My proposed algorithm for a new FL aim multiplier is:

under 500 objects: 1 + objects/900

above 500 objects: 1 + 500/900 + (objects-500)/600
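For clarity, here is the proposed piecewise multiplier written out in code; this is a direct transcription of the numbers given above, which may still change:

// Direct transcription of the proposed FL aim multiplier described above.
double ProposedFlashlightAimMultiplier(int objects)
{
	if (objects <= 500)
		return 1.0 + objects / 900.0;

	return 1.0 + 500.0 / 900.0 + (objects - 500) / 600.0;
}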

Here is an image showing what this algorithm does to some FL plays, and a graph of a comparison between the old multiplier and the proposed multiplier.

The exact numbers may change, but the main points of this new algorithm are that:

  1. The base multiplier at 0 objects starts at 1, as opposed to the old system which starts at 1.3775. This means that short maps will get a lower bonus.

  2. The whole algorithm is linear. This means that long maps get more of a bonus. The old system had a logarithmic curve, which meant that the more objects, the lesser change in the multiplier.

There are still problems, of course. There is much more to a map's difficulty in terms of FL than just object count. But, this will hopefully fix the under/overweight problems of FL on the majority of maps. This is not perfect, but it seems like a good starting point.

Markov Model Probability of FC

Calculating the joint distribution over possible player cursor positions given the placements and times of notes in a beatmap is prohibitively expensive. It relies on a product of multiple conditional probabilities, where the number of elements conditioned on increases linearly with the number of notes in the beatmap. A Markov Model mitigates the increasing number of parameters of the conditional distributions by instead assuming that only the k most recent previous elements play the largest role in the probability in comparison to those before them. The aim is to approximate the exact joint distribution of cursor positions given the beatmap using a Markov Model, then train the parameters of the conditional distribution used to form this approximation through Maximum Likelihood Estimation on beatmap replay data. Attached is a short writeup going into further detail of a proof of concept:

PP_Rework (2).pdf
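For illustration, the k-th order Markov assumption described above amounts to approximating the joint distribution as a product of short conditionals (x_i denoting the cursor position at note i, with out-of-range indices dropped):

P(x_1, ..., x_n | beatmap) ≈ product over i of P(x_i | x_(i-k), ..., x_(i-1), beatmap)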

HD Rebalance - A Deeper Look

Firstly it’s great to see some changes finally being suggested, and being implemented towards the issue that is the Performance Points system, and more specifically towards the huge impact that the Hidden mod has in its current state.

However there are some major issues regarding the recently merged - and soon to be deployed - HD nerf proposed by Xexxar.

On the surface the change looks pretty good. We see an increase in PP for HD streams and a decrease in PP for HD jumps, and when we check the scores from the top 15 players, we get a result that looks close to what we are aiming for: a nerf to HDDT farming, and a nice buff for underweighted stream maps. The only issue that stands out here is the fact that the top 15 players are not an accurate representation of the entire osu! community.

Before we compare some scores I'd like to make clear why HD is being targeted rather than DT directly. HD is considered by most to not increase the difficulty of a map by much - especially at higher Approach Rates (as the time between the circle disappearing and the moment you click it decreases) - and is therefore taken by many players when they play higher AR, because it is seen to be "free PP". "HD can even make reading easier for very high AR maps" - Tom94

Now that we understand the context, let’s take a look at two types of scores; HD/HDHR plays, and HDDT plays, around the same pp value pre-nerf.

Below are some scores from each playstyle before and after the change. If all is going to plan with the re-balance then we should see a decrease in PP for HDDT and HD aim scores and an increase in PP for stream heavy maps.

Comparison of HD, HDHR, and HDDT scores

Now although we do see a decrease to HD aim scores, we also see that HDDT actually gets hit less than HDHR and HD only aim scores. On top of this the increase in PP we should see to stream heavy maps is barely noticeable, with scores such as Dendei - gabe power +HD,HR gaining less than 0.5% PP. In fact compared to the change that aim receives, streams are barely changed at all without looking at extremely spaced streams, the likes of which we really only see from the top handful of players. This means that for the average 4-5 digit osu! player, HD streams are left basically untouched.

If we were to compare scores from slightly lower ranked players, we would see much the same thing, amplified due to how little lower spaced streams are buffed comparatively.

As far as fixing this flaw goes, I am suggesting that instead of hitting all scores containing HD with a flat nerf to the multiplier, e.g.
((aim*1.18)^(1.1) + (speed)^(1.1) + (acc*1.02)^(1.1))^(1/1.1) * 1.12
being changed to
((aim*1.05)^(1.1) + (speed*1.15)^(1.1) + (acc*1.02)^(1.1))^(1/1.1) * 1.12
We can scale this aim multiplier with the AR of the map. The higher the AR on a map, the easier HD is to play on said map compared to no HD, and hence the less PP should be awarded for playing HD.
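As an illustration only, an AR-scaled HD aim multiplier could look something like the sketch below; every constant here is a placeholder, since (as noted below) the actual numbers are still being worked out:

// Illustrative sketch only: an HD aim multiplier that shrinks as AR rises,
// since HD helps reading less and less at high AR. All constants are
// placeholders, not proposed values.
#include <algorithm>

double HiddenAimMultiplier(double ar)
{
	// 0 at AR 7 and below, 1 at AR 10 and above.
	double t = std::max(0.0, std::min(1.0, (ar - 7.0) / 3.0));

	// Placeholder range: full bonus at low AR tapering off at high AR.
	return 1.18 - 0.14 * t;
}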

I am currently working with a group of people to come up with numbers for this change, however we fully believe that this is a better option than the current nerf.

In implementing this change we should see a similar situation with top players as with the current nerf, we will see a general nerf to HD aim, with the emphasis being on DT and high AR’s, but unlike the current nerf the trend should continue through all rank ranges. As for streams the buff from this will be greater in terms of percentage due to the aim nerf being more prominent in very high AR, while still being a nerf to HD aim overall, as well as lower AR streams being buffed even more.

Slight PP Nerf/Buff based on saturation of specific play

Idea:
A dynamic add-on system that attempts to polish the current pp system and can be used to level out any overweighted/underweighted maps during any meta. The system should (hopefully) assume nothing about the current pp system. It will not scale itself to hard-nerf currently overweighted jump maps just because they are jump maps, and should have no bias towards any type of map. If an 800pp play has been replicated multiple times, no matter what meta/pp system, it will be deemed overweighted and slightly hit. I personally call this Performance Points Polish (PP+) but you can call it whatever.

Goals
What It Will Do:
Nerf overweighted maps slightly
Buff underweighted maps very slightly

What It Will Not Do:
Be a dictating force in the pp meta

Prerequisites
I've added the following prerequisites to prevent abuse:
NF and SO mods do not count when detecting whether to nerf/buff. NFSO mod combinations are counted as nomod.
All mods will be scaled to the nomod nerf, but relative to how similar their accuracy/combo is to the nomod case. The scaling can go as low as 25% of the initial nerf.
Example: An NFSO SS will still be counted as a nomod SS when scanning leaderboards.
Example: If Honesty nomod would be nerfed 40pp, then Honesty HD may be nerfed 32pp (an 80% change). If Save Me would receive a 20pp nerf, DT Save Me currently would scale down to a 4pp nerf. The same goes for an EZHDDT score IF the current EZHDDT scores aren't overweighted. (These are just theoretical base numbers, which will be scaled depending on the map, as discussed later.)

Nerfs

  1. If a difficulty becomes oversaturated in plays, then a small pp deduction will be placed on the entire difficulty
    Example: If the map is played a lot, it will receive a nerf. Note: whether the nerf will be 0pp or 10-50pp is covered in the rest of the points.

  2. The deduction will be relative to how extreme the oversaturation is, and will scale a bit with the map's raw pp value (so 2* maps don't get hit disproportionately hard, and the nerf lands where it's actually meant to go).
    Example: 500k plays may result in 2pp flat nerf to 200pp cap map. The same amount of plays to 1000pp will scale to 10pp nerf
    Example: 10k plays may result in 0.1pp flat nerf to 200pp cap map. The same amount of plays to 1000pp will scale to 1pp nerf

  3. The deduction will get a minor amplifier based on the quality of the players. This amplifier can be determined by first checking the higher quality players, then checking whether lower quality players have achieved the same play.
    Example: Big Black, without this point, may experience a super slight nerf of like 0.01. But because the majority of the players (judged by some combination of tournament placements and rank, as well as previously acquired PP+) cannot even reach the full pp count of the map, the value will be a negative number, low enough to warrant a buff (which I will discuss later).

  4. The deduction will then be rescaled and recalculated relative to the potential pp output of the difficulty and the actual pp output of the difficulty, a.k.a. its overweightedness.
    This is where real nerfs can take place.
    Example: The 10pp nerf may scale to 20pp if the output for overweightedness is near full SS. A majority of maps that have Overweightness 0 will bring the imaginary 10pp nerf to 0pp.

  5. Rules 2, 3 and 4 will be recalculated on a weekly, monthly, or similar cycle.

  6. There can be a hidden secondary cycle placed onto maps that functions similarly to Point 5, but at a slower interval.
    Example: Point 5 may work on a biweekly cycle (every 2 weeks). If a map was once played a lot, but has recently seen fewer plays, an admin can put it on the 2nd queue that works on a bimonthly cycle instead <---- this is to prevent some consequential overload.

Buffs

  1. If a map has been sighted as oversaturated, but has received a negative number reaching a certain threshold, it will instead receive a buff.
  2. The value will be halved, and the negative number turned into a positive number.
    Example: If Big Black were to be a (-) 20pp nerf, it will instead be a (+) 10pp buff.

???
I wrote this idea a long time ago randomly at midnight, so I don't know if it is really legible or not. Even if there are some inconsistencies in what I actually said, I will be a firm believer in this idea having potential until it is heavily disproven. I've also made a picture of my idea with simple clarification.
DONT CRASH ON ME PLZ
I know this idea needs some heavy modification/improvement, but I would like some acknowledgement of its potential. It can theoretically bypass the need for a pp system, but I'm 100% against that belief anyways. Like I said earlier, I intend this to be a sort of polish, and the values should be much smaller. An overweighted 800pp map should never even get hit by 80pp; my idea is that the system is small enough to only hit it by 30 at most. (I made the #s bigger in the examples for better visualization.) Also, obviously the math shouldn't be this simple. Linearly scaling saturation and quality of players is obviously a bad thing (look at Big Black) and will over-nerf/buff many things. Not every map will ideally be affected. Numbers under a small threshold (e.g. [+ or -] 1pp) will be nullified for simplicity's sake.

Proposal to work towards a better Taiko SR system.

With the recent qualification of HELIX (https://osu.ppy.sh/b/1584721&m=1), I wonder if it's finally time to have a look at updating/improving the SR algorithm in osu!taiko.

A bit of background:
As far as I know, the SR algorithm is old, dating back to pre-2010. Back then the mapping meta was very different, following a more straightforward "authentic Taiko no Tatsujin style", and the SR algorithm was built in consideration of the maps that were available at the time.*

However, with increasing diversity in mapping styles and tastes, the past two years have seen a steady rise in the number of highly SR-inflated maps into osu!taiko's ranked section. As this is a competitive game, it is not surprising to see that these maps saturate the current meta as "must be played" in order for competitive players to maintain their ranking. From what I understand, it is a situation not too dissimilar from osu!. The SR algorithm is simply too dated to accurately judge the current meta, and an overhaul is long overdue.

*I should also add that apparently there was a council of taiko players that decided what was hard and what wasn't, and the story goes that it was decided that "doublets" were hard, and hence the reason why 1/8 doublets are so overweighted today.

What's wrong with HELIX (and so many other SR-inflated maps)?
The answer is that there is no problem with HELIX. HELIX [Inner Oni] is the latest and most extreme case of SR-inflated maps - it is a 7.50* map that plays like a 5* map. The inflation is caused by the presence of many consecutive 1/8 doublets, which the game inaccurately deems to be extremely difficult. No one is disputing the map's timing/snaps, nor its spread progression - the mapset is excellent and plays perfectly fine. We should instead focus our energies on fixing the SR algorithm.

What I propose we should do:
I think the majority of the taiko playerbase already knows what is over/underrated with regards to osu!taiko's SR algorithm. I think the right way forward is to create an interactive tool that allows players to play around with the weighting of the different components that contribute to the SR. In particular, I would suggest addressing these 3 things as a priority:

  1. Look into what makes 1/xth doublets overweighted, and nerf accordingly. Not all doublets are overweighted actually - look at this map for instance (https://osu.ppy.sh/b/1515547).
  2. Look into 1/4th + 1/6th patterns, and nerf/buff accordingly.
  3. Apply a buff to raw speed (eg. >300bpm patterns)
  4. TBC...

There are efforts already being made by certain community members in an attempt to recalculate SR (eg. https://osu.ppy.sh/forum/t/656048). From what I have seen, those attempts are an order of magnitude more accurate than what we currently have for the SR algorithm.

Given the success of osu! and osu!mania pp recalculations, I think it's about time we did something for osu!taiko too!

Hopefully this post sheds some more light into the osu!taiko SR situation, and maybe we can hope to see some real changes in the near future? Feel free to poke holes in my proposal/suggestion.

Remove rhythm change bonus from taiko SR

All the current meta farm maps abuse rhythm changes. Maps with genuinely hard 1/6 patterns do exist (and DT'ing the 1/6 farm maps is only doable by like 10 people), but the difficulty doesn't come from the rhythm changes themselves. Something needs to be done about maps like that (at the very least, acc pp needs to account for this kind of stuff, and for OD), but in the meantime, I propose simply removing the rhythm change bonus.

Accuracy formula change proposal

Introduction

Recently, MBmasher has released a proposal asking for an accuracy change. While addressing some issues, I personally think the change was too harsh.

The proposal, in short, basically renders plays that have an accuracy lower than 85% trash (especially when it goes under 65%) due to how the algorithm is made. While the algorithm does reward a small bonus when the accuracy goes over 95%, anything under 90% is quite noticeably nerfed.

An update has been made, and the formula now rewards a bit more for SS scores. The nerfing ceiling is now 95%, similar to MBmasher's 96.15%, though the overall nerf is not as steep. Compared to the old formula, it still becomes more rewarding above 99.19%.

While this sounds reasonable when addressing proposed issues, I think it penalizes a lot of new players who want to try harder difficulties for themselves, or those stuck with a play style (eg mouse-only). Since these changes don't award them much, they won't try harder difficulties anymore, making the game not fun :( (though after a few changes, it just makes the curve more fair)

With that said, I have decided to make my own algorithm.

The differences

  • Line is no longer as steep
  • On the high accuracy range (~95% to 100%), there's still a nerf until 96.15% at OD11 (for reference, MBmasher's is 93.69%)
  • Some constants from MBmasher's were used on mine
  • MBmasher's peak bonus is 1.029, while mine is 1.0305 (for reference, originally it is 1.0284) (OD11)

The Maths

Pretty

Graph

Black: mine
Pink: MBmasher's
Green: Original

X-axis: Accuracy
Y-axis: Bonus

LaTeX

\left(\frac{\left(\sin\left(\left(x^1+\frac{o_d^{\frac{1}{2}}}{k}\right)x^2\right)\right)}{\frac{o_s}{f}+0.35}\right)+a\left\{x\le1\right\}\left\{x>0\right\}

lengthBonus as a Summation of Difficulty Instead of Object Count

Hello,

In collaboration with MBmasher, I present a new solution to the length problem.

The Problem
Currently the algorithm addresses length as the total object count of a map. While this idea certainly has merit, it has several problems. For example, a 2 minute stream map that has lots of 1/4 rhythm can be treated as longer than a 5 minute jump map simply because its object count is denser. In addition, the difficulty of these objects is not assessed. A map that has relatively consistent difficulty would get the same length bonus as a map that has difficulty spikes, assuming the object count and other factors were equal.

Thus, something new is needed.

The Approach / The Math
In order to properly balance length, a system is needed that assesses the aggregate difficulty of a map relative to its star rating. To make this a bit more visual for those who aren't seeing where I'm going with this, here are some graphs that display the difficulty curves of both speed.cs and aim.cs

http://puu.sh/BNTOR/cf7fa8a2b2.png
http://puu.sh/BNTPh/cd6a05b751.png

With these images, it should be clear what I intend to do. By summing up the area under the curve, we can give reward for length not only in terms of real time, but also balance for the difficulty of the section. Without any more delay, the math for this suggestion is shown below.

Formal: http://puu.sh/BNTWD/e5090554ca.png
Code:

for strain in strains:
    total += strain ** d
length_bonus = -c + b * ((2 + log10(aim_total + speed_total)) / (2 + log10(stars)) - 1)

where the sum over strains is computed separately for aim and speed to produce aim_total and speed_total, and stars is the map's star rating

In essence, this function takes the log of the aggregate difficulty with the log base being the star rating of the map (as I'm using the formula log_x(y) = log(y) / log(x)). This makes it so length is rewarded regardless of the star rating. Some people may have questions as to why I have 2+log etc., but for the most part these constants are there to prevent huge adverse scaling for low SR maps. With the testing I have done, the following values produce what I believe are satisfactory results.

b= 1.6
c=-1.28
d=1.12

I will say that the only major issue I have with this method currently is the slightly too large buff for normal and hard maps (because these type of maps don't spike at all, so they are buffed in comparison). That being said, the issues are minor compared to the benefits.

Some Results
Ah yes the juicy part that everyone cares about. I'm a bit concerned with the calculator that we developed to do the testing as there are a few anomalies, so some values might be slightly inaccurate. (I apologize for the small sample size). Please note that these values are still subject to change.

Within Temptation - The Unforgiven[Marathon] https://puu.sh/BNUBj/255ac14540.png
Fairy FORE - Vivid[Insane] https://puu.sh/BNUBM/583c3a33b5.png
Will Stetson - Harumachi Clover [Oh no!] +DT http://puu.sh/BNVvK/64b1959c7e.png
Will Stetson - Harumachi Clover [Oh no!] https://puu.sh/BNUFw/d1aac4c924.png
VINXIS - Sidetracked Day[THREE DIMENSIONS] +HR http://puu.sh/BNVB6/c2378d6cb4.png
MuryokuP - Sweet Sweet Cendrillon Drug[Normal] https://puu.sh/BNVbx/39f1d87226.png
MuryokuP - Sweet Sweet Cendrillon Drug[Cendrillon] https://puu.sh/BNVbe/62217919a5.png
Imperial Circus Dead Decadence - Uta[Himei] +HR https://puu.sh/BNUWU/ecdeab1163.png
Yamajet feat. Hiura Masako - Sunglow[Melody] http://puu.sh/BNVFx/3894582b87.png
Yamajet feat. Hiura Masako - Sunglow[Melody] +DT http://puu.sh/BNVGz/238f09cb50.png
GYZE - HONESTY[DISHONEST] +HR https://puu.sh/BNUVB/2ccf2a7da0.png
GYZE - HONESTY[DISHONEST] http://puu.sh/BNVIu/93fc534305.png
Fujijo Seitokai Shikkou-bu - Best FriendS (GoldenWolf)[Extreme] http://puu.sh/BNVRr/df180fdef9.png
Fujijo Seitokai Shikkou-bu - Best FriendS (GoldenWolf)[Extreme] +DT http://puu.sh/BNVSk/042ead7be8.png

The astute among you might have realized that DT universally causes the lengthBonus to be reduced. This is indeed intended and is one of the best features of this algorithm: lengthBonus varies across mods. In fact, DT universally lowers the bonus and HT universally raises it.

Conclusion
While these values aren't all perfect, I believe them to be leaps and bounds better than the current system. In the coming days MBmasher will likely release an updated calculator for those who are interested in testing some values on their own.

With that all said, I hope you all are interested in pursuing this change. Hopefully an official PR can be produced and testing can begin on the official code soon. Thank you for reading and remember to drop a 👍 if you like the change!

Rhythmic Complexity

So I solved abraker's rhythm complexity. What do you all think?

I started out with 4 assumptions

  1. Rhythm complexity decayed with time
  2. If time between objects remained constant for infinity, rhythm complexity converged to a value.
    The value that it converges to is explained later, the assumption is just the idea that it converges to the value.
  3. If the time between objects (delta t) is the same, rhythm complexity will only increase up to a certain limit.
    Rhythmic value does not have a limit (within reason), but the influence speed has on rhythmic value has a cap. Ex. 250 bpm streams and 260 bpm streams do not vary as much in rhythmic complexity as 170 bpm to 180 bpm streams do.
  4. Rhythm complexity increases from delta time based on a harmonic and the value of said rhythmic complexity.
    Basically, this assumption states that changes in delta time (increasing complexity in rhythm) increases rhythm complexity f by a multiple of f and a harmonic with period 2*pi/(interval). See abraker's for more info on this, though I apply it a bit differently.

The first assumption implies that if f(t) is rhythm complexity as a function of time t, then f'(t) is proportional to -af for some positive real number a. This decays rhythmic complexity by a factor of e^(-a) every millisecond (since time is measured in milliseconds in osu!).

The second assumption implies that if the time between objects doesn't change, the rhythm complexity converges to a value c. The convergence from a rhythm complexity f(0) at t = 0 to f(infinity) = c is modeled as
f(t) = c + (f(0)-c)e^(-at).

The third assumption implies that c is proportional to a logistic function of delta t that always decreases (flip the typical logistic function across y-axis).
Example: c(Δt) = k1 + k2/(1+e^(0.1*(Δt-85)))
0.1 and 85 are subject to change and k1 and k2 are to be determined.

The third assumption aside (for now), combining the first two assumptions gives us:
df/dt = -af +sum(g(t-t_i)*u(t-t_i))
where f is rhythm complexity, g(t-t_i) is the function such that f(t) = c(Δt_i) + (f(t_i)-c(Δt_i))e^(-a(t-t_i)), u(t) is the Heaviside step function, and t_i is the time of object i. You will see this function a lot throughout this.
I combined them this way because both the decay and the non-homogeneous portion affect how the graph changes and interact with each other. This models it accordingly.

After some Laplace transform manipulation (plugging the function from the line above in for f), you get
g(t-t_i) = a*sum[c(Δt_i)*(1-u(t-t_(i+1)))].
g(t-t_i)*u(t-t_i) = a*c(Δt_i)*(u(t-t_i)-u(t-t_(i+1))) because u(t-t_i)*u(t-t_(i+1)) = u(t-t_(i+1)).
Here Δt_i = t_(i+1) - t_i.
While putting it all together, you add the fourth assumption. This is where it gets tricky.
This implies that rhythmic complexity should increase by a multiplicative amount when a change in delta time occurs, because a 1/2 to 1/4 change in rhythm should increase complexity by the same percentage at 230 bpm as at 170 bpm. If you instead add a fixed amount to complexity, the impact is smaller at higher bpm, when in fact the impact should be the same relative to the bpm.
Let p(T_i)=(1+b(1-cos(2pi*T_i)))
where T_i = Δt_(i-1)/Δt_(i-2)
Then
f(t) = [c(t_1-t_0)+(f(t_0)-c(t_1-t_0))*e^(-a(t-t_0))]*(u(t-t_0)-u(t-t_1))
+sum([c(t_(i+1)-t_i)+(p(T_i)*f(t_i)-c(t_(i+1)-t_i))*e^(-a(t-t_i))]*(u(t-t_i)-u(t-t_(i+1))))
for i objects
However, this poses a problem: p(T_i)*f(t_i) exposes a contradiction. In that section of the equation, it implies that f(t_i) = p(T_i)*f(t_i), which is impossible unless p(T_i) = 1 or f(t_i) = 0.
It's time for difference equations. We now must express f(t_i) in terms of values that are not themselves evaluations of f at earlier points (i.e. in a form a computer can actually use).
We have to come up with a difference equation given this recursive relation.

f(t_i) = (c(Δt_(i-1))+(f(t_(i-1))-c(Δt_(i-1)))*e^(-a*Δt_(i-1)))*p(t_(i-1))
for each object
This part is easy to program in a computer. You can replace the Δt_(i-1) in the exponent with t-t_(i-1) to calculate the rhythm complexity at any point in time between objects i and i-1. Use this to calculate the rhythm complexity at the end of a 400ms chunk if the end of the chunk happens before the next object.
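For concreteness, a rough sketch of how that recurrence might be evaluated in code is below (hit times in milliseconds; the constants a, b, k1 and k2 are placeholders rather than tuned values):

#include <cmath>
#include <cstddef>
#include <vector>

std::vector<double> RhythmComplexity(const std::vector<double>& hitTimes)
{
    const double pi = 3.14159265358979;
    const double a = 0.005;          // decay rate per millisecond (placeholder)
    const double b = 0.1;            // strength of the rhythm-change bonus (placeholder)
    const double k1 = 1.0, k2 = 2.0; // parameters of the convergence value c(Δt) (placeholders)

    // c(Δt): flipped logistic convergence value; p(T): multiplicative bonus for a rhythm change
    auto c = [&](double dt) { return k1 + k2 / (1.0 + std::exp(0.1 * (dt - 85.0))); };
    auto p = [&](double T) { return 1.0 + b * (1.0 - std::cos(2.0 * pi * T)); };

    std::vector<double> f(hitTimes.size(), 0.0);
    for (std::size_t i = 1; i < hitTimes.size(); ++i)
    {
        double dt = hitTimes[i] - hitTimes[i - 1];
        double T = i >= 2 ? dt / (hitTimes[i - 1] - hitTimes[i - 2]) : 1.0;
        // decay from the previous value towards c(dt) over dt milliseconds,
        // then apply the multiplicative bonus for the change in delta time
        f[i] = (c(dt) + (f[i - 1] - c(dt)) * std::exp(-a * dt)) * p(T);
    }
    return f;
}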

This might be all we need for the code, but sometimes referencing previous values of something can be troublesome, so I solved the difference equation just in case. Edit: Replace the m with an i.
(image: the closed-form solution of the difference equation)
If I end up using this solution in the code instead (which I probably won't), I'll group the exponents of t_n and t_i together to avoid overflow.

HD: Fundamentally Flawed and Here Is Why

Before I start I want to explain who I am and why you should consider my point of view. I am a student graduating in less than two weeks with a major in mathematics, and I have been a member of the osu! community for 5 years. I have been a mapper for over 4 of those years, and in my naive phases of mapping I intentionally tried to abuse the algorithm multiple times to make my maps more popular. I am aware of the flaws of the system and I fundamentally understand the roles of speed, accuracy, and aim and how they are measured and calculated in the current system.

With that out of the way, let's begin.

Under the current algorithm, the role of the hidden mod is to give an arbitrarily defined bonus of approximately a 1.18 multiplier to aim and a 1.02 multiplier to accuracy. Based on how aim is calculated, this implies that hidden gets more difficult as objects (in this case jumps) are spaced farther apart from each other. This is rather unintuitive: fundamentally, HD does not get harder because one object is more spaced than another; it gets harder because of object density, among other more complex factors.

While I believe this to be self evident, I believe it's also important to consider the professional scene of players and how they utilize HD. In the top scene of players, we can see that almost all of them use HD for songs that do not have high speed difficulties. Even players that seem to always use HD for top scores will sometimes not use it for songs that are primarily stream heavy. For example, Vaxei (Donkey Kong) uses HD for all his top scores except for DragonForce - Seasons [Legend] +HR. We also see Mathi using HD for almost every aim map, but for his score on VINXIS - Sidetracked Day [Three Dimensions] +HR, he does not use it.

In addition to this, top stream players such as Idke or lain almost exclusively avoid the mod on stream maps due to multiple reasons. One likely reason is the fact that there is little bonus to maps that are stream heavy with HD, but another reason that makes more sense is that it's not worth the bonus they get for the added difficulty. This should make it clear that something is not correct with how the algorithm awards bonus PP for the HD mod.

What can be done is very simple: change the algorithm to give the bonus to speed instead of aim. This does not have to be a 100% transfer of the 1.18 multiplier; it could be a hybrid of both. For example:

Current Algorithm
((aim * 1.18)^1.1 + speed^1.1 + (acc * 1.02)^1.1)^(1/1.1) * 1.12

Potential Algorithm
((aim * 1.05)^1.1 + (speed * 1.15)^1.1 + (acc * 1.02)^1.1)^(1/1.1) * 1.12
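
For clarity, here is a sketch of how the hybrid multipliers would enter that combination (aim, speed and acc are the per-skill pp values before any HD adjustment; the function name is mine):

#include <cmath>

double TotalValue(double aim, double speed, double acc, bool hidden)
{
    const double aimMul   = hidden ? 1.05 : 1.0; // proposed, down from 1.18
    const double speedMul = hidden ? 1.15 : 1.0; // proposed, up from 1.0
    const double accMul   = hidden ? 1.02 : 1.0; // unchanged

    return std::pow(std::pow(aim * aimMul, 1.1) +
                    std::pow(speed * speedMul, 1.1) +
                    std::pow(acc * accMul, 1.1),
                    1.0 / 1.1) * 1.12;
}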

I am posting this in its infancy to get your perspective on it, and I will gladly work to further develop documentation as well as collect data on how this would impact the top 1000 players through the databases provided on your website. Here is an example of what this algorithm would do to some map data I have created using Google Spreadsheets:

http://puu.sh/Ae3zh/b0bc1de403.pdf

Bear in mind that the exact workings of the algorithm are subject to change. Personally, since speed values are typically lower than aim values, I think a larger speed multiplier such as 1.2 may be a good thing.

Angle Assessment as a Function of Object Distance

Hello,

The Problem
I probably don't even need to include this section, but... it is well known that one of the largest issues with the pp algorithm is that angles are not factored into the difficulty of a map. Due to this flaw, many maps are either underweighted or overweighted. Thus, a new method of assessing this type of difficulty is required.

Required Changes to the Algorithm
Beyond alterations to diffcalc itself, this requires introducing a method that returns the angle formed at the current object between the previous note and the following note. With this method, new tools for balancing patterns with special attention to angles can be created. Effectively, what would need to be done is the following:

            int x1 = prev.X - current.X;
            int y1 = prev.Y - current.Y;
            int x2 = next.X - current.X;
            int y2 = next.Y - current.Y;
            int dotProduct = x1 * x2 + y1 * y2;
            int determinant = x1 * y2 - y1 * x2;
            // Math.Atan2 returns a signed angle in radians; Math.Abs(angle) * 180 / Math.PI
            // gives the unsigned degree value that the thresholds used later are compared against.
            double angle = Math.Atan2(determinant, dotProduct);

Why Distance is Preferred
In trying to assess angles, it is hard to make general rules without assessing the absolute distance between hitobjects. The reason for this is simple. While jumps that are linear in nature are typically harder than back-and-forth jumps, the same is not true for streams, which are typically easier when they are in a straight line. Thus, I propose separate angle buffs for speed and aim which depend on functions of distance. While it may be an over-generalization, jumps tend to be more difficult the more linear they are, whereas streams are more difficult the more harshly "curved" or non-linear they are (smaller interior angles). Some people may argue that time elapsed is a better discriminator, but since it is hard to differentiate a 280 bpm alt map from 280 bpm jumps, I do not think that method will work well. Thus, let's go into the specifics of how this might work.

Speed Case
As speed is intended to balance both single-tapping speed and stream speed, it can be very difficult to balance the two in a single algorithm without adversely affecting one of them. Thus, what I want to do is introduce scaling based on the angle of the object. A stream at an angle of 135 degrees or less should start to see a buff for being more difficult to play, increasing until 90 degrees. Here is what that code would look like.

protected override double StrainValueOf(OsuDifficultyHitObject current)
{
    double distance = current.TravelDistance + current.JumpDistance;
    double angle = current.NewAngleMethod(); // placeholder for the proposed angle method, in degrees
    double angleBonus = 1.0;

    if (angle <= 90)
        angleBonus = 1.25;
    else if (angle < 135)
        angleBonus = ((135 - angle) / 45) * 0.25 + 1.0;

    double speedValue;
    if (distance > single_spacing_threshold)
        speedValue = 2.0;
    else if (distance > stream_spacing_threshold)
        speedValue = 1.6 + 0.4 * (distance - stream_spacing_threshold) / (single_spacing_threshold - stream_spacing_threshold);
    else if (distance > almost_diameter)
        speedValue = 1.2 + 0.4 * (distance - almost_diameter) / (stream_spacing_threshold - almost_diameter);
    else if (distance > almost_diameter / 2)
        speedValue = 0.95 + 0.25 * (distance - almost_diameter / 2) / (almost_diameter / 2);
    else
        speedValue = 0.95;

    return speedValue * angleBonus / current.StrainTime;
}

Which can be visualized with the following image.

Please observe that the speedValues for distance were altered with this proposal. In order to maintain consistent single-tapping stamina speed values, we can now lower the extreme scaling that occurs between stream_spacing_threshold and single_spacing_threshold, because most single taps will receive the 1.25x buff that has been added. Thus, a single tap would receive its standard 2.0 x 1.25 = 2.5 speed value, while linear streams no longer get the extreme scaling between the two thresholds that used to exist. This change alone will GREATLY impact the overweighted stream maps that abuse this oversight in the algorithm (this may have actually been my main goal with this proposal). (These values are placeholders; testing will need to be done before anything is finalized.)

Aim Case
Utilizing a similar idea as before, we can use this new angle method to reward wide-angle jumps like the ones seen in older maps and other "anti-pp" maps. In this case, however, in order not to reward aim on streams (since they're linear), only buffing the portion of distance that exceeds regular streaming distance seems to be the best solution, for example the distance that exceeds single_spacing_threshold. While it is arbitrary, here is an example of how such a buff might work for angles greater than 75 degrees.

And here is how the code for such a suggestion may be implemented.

protected override double StrainValueOf(OsuDifficultyHitObject current)
{
    double angle = current.NewAngleMethod(); // placeholder for the proposed angle method, in degrees
    double angleBonus = 1.0;

    if (current.JumpDistance > single_spacing_threshold)
    {
        if (angle > 120)
            angleBonus = 0.5;
        else if (angle > 75)
            angleBonus = ((angle - 75) / 45) / 2;

        return (angleBonus * Math.Pow(current.JumpDistance - single_spacing_threshold, 0.99)
                + Math.Pow(current.TravelDistance, 0.99)
                + Math.Pow(current.JumpDistance, 0.99)) / current.StrainTime;
    }

    return (Math.Pow(current.TravelDistance, 0.99) + Math.Pow(current.JumpDistance, 0.99)) / current.StrainTime;
}

Thus, any jump distance beyond single_spacing_threshold is scaled by the angleBonus (assuming my programming is correct). This rewards angles exceeding 75 degrees with a scale that increases up to 0.5 for angles greater than 120 degrees.

Conclusion
While this solution still does not take into account many factors of angle calculation such as flow, flow changes, weird patterns etc, it is a step in the right direction towards remedying major issues within the algorithm. While I didn't mention it in the initial introduction, the most important change from this will be the rebalancing of streams that exceed the stream_spacing_threshold by nerfing the 1.6 -> 2.5 scaling that occurs between the distances of 110 and 125. I hope some test versions of this suggestion can be implemented so we can see what this might be able to do to improve the algorithm.

As always, thank you for reading and have a wonderful day.

Strain time threshold suggestion

Should we reconsider the 50ms cap in strain time calculation?
50ms corresponds to these limits: 600 bpm 1/2, 400 bpm 1/3, 300 bpm 1/4, 200 bpm 1/6
The limit I want to discuss is 300 bpm 1/4s, which usually means streams. 300 bpm maps are rare, but with DT such occurrences are considerable, as any >200 bpm map containing 1/4s becomes a >300 bpm map. Although such maps are too difficult for most players, there are a (very) few who can play them. Regardless, since we allow many such maps to be ranked, we should ensure their playability too.

I understand this cap is there to prevent extreme cases, but I think we should raise the cap to include the bpm range of all ranked maps, including DT. As a solution, setting this cap to 20ms (corresponding to 1500 bpm 1/2, 1000 bpm 1/3, 750 bpm 1/4, 500 bpm 1/6) is sufficient.
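
As a minimal sketch, assuming the cap is applied as a simple lower bound on the time between objects, the change amounts to:

#include <algorithm>

double CappedStrainTime(double deltaTime)
{
    return std::max(deltaTime, 20.0); // proposed 20ms cap; the current code uses 50ms
}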

Touchscreen PP Reworks (Speed, Acc, Flashlight)

Refer to #79 for the Accuracy and Speed Acc changes

Flashlight is one of the most unrewarding mods that you can use on touchscreen for a variety of reasons.

The most obvious reason is due to how FL only boosts aim, while TD puts a massive nerf on aim, negating the effects of the boost.

Secondly, playing FL on touchscreen is considered much more difficult overall because the cursor simply teleports around, meaning far less of the playfield gets uncovered due to the minimal cursor movement.

A Touchscreen FL rebalance (buff) is quite necessary as suggested by many touchscreen players.

Edit: Visual of the proposed bonus

How this will work is:
aimValue *= 1.0f + 0.4f * Math.Min(1.0f, totalHits / 100.0f)
    + (totalHits > 100
        ? 0.4f * Math.Min(1.0f, (totalHits - 100) / 200.0f)
            + (totalHits > 300 ? (totalHits - 300) / 900.0f : 0.0f)
        : 0.0f);

In shorthand, the aim boost ramps from 1x to 1.4x over the first 100 objects, then from 1.4x to 1.8x between 100 and 300 objects. Past 300 objects, an additional 1x is added for every 900 objects.

osu!catch SR rebalance proposal

I originally wrote about some possible changes to catch SR back on the forums around 2 months ago, now with catch on lazer shaping up we can see how these changes actually look.

I'm a total gitnoob and I'm no programmer either so I've included the code I have below, if anyone can assist with sorting out a PR that would be a huge help.

Movement.cs
CatchDifficultyCalculator.cs
CatchDifficultyHitObject.cs

If anything in this post contradicts the code, then the code is correct and the post is wrong

Edge Dash Bonus

Quick note on what an edge dash is for those unfamiliar with this terminology. An edge dash is when the distance between two notes is very close to triggering a hyperdash, requiring the player to use the edges of the catcher to be able to catch the fruits. The precise movement and timing required makes these dashes fairly tricky.

Edge dashes have been undervalued in SR for some time. Thankfully an edge dash bonus already exists so it can just be increased a fair bit. The bonus is also scaled by strainTime, reducing the bonus at lower ms values where edge dashes are actually easier as the hyperdash generation threshold is more lenient. The proposal also increases the distance at which a dash is considered an edge dash as it seems a little too strict currently.

Live
Bonus = 1.0
Edge dash threshold = 10

New
Bonus = 5.00
Edge dash threshold = 14
Edge dash speed scaling = Bonus * (Math.Min(catchCurrent.StrainTime, 180) / 180)

This gives a significant increase to SR on beatmaps with edge dashes, most notable on converted beatmaps.

Hyperdash Bonus Removal

There currently exists a bonus for hyperdashes and edge dashes on a direction change. This bonus does very little for edge dashes yet has a significant impact on hyperdash heavy beatmaps, most noticeable on Overdose (Extra) level Specifics.

This proposal removes this bonus, leaving edge dashes to be handled solely by the other edge dash bonus.

As for hyperdashes, no bonus seems necessary. Whilst difficult for newer players, hyperdashes are trivial at higher skill levels and the difficulty comes from how the hyperdash is used in a pattern. Exponential distance scaling should handle hyperdashes adequately enough.

Reduce Speed Scaling

SR growth from high BPM values is simply too high. High BPM beatmaps can easily achieve SR values well above 8*, see https://osu.ppy.sh/s/432720 and https://osu.ppy.sh/b/1632808 . At more moderate BPM values stream jumps are problematic too.

Live
StrainTime = StrainTime
StrainTime cap = 25ms

New
WeightedStrainTime = StrainTime + 20
StrainTime cap = 40ms

By increasing the StrainTime cap to 40ms (equivalent to 375BPM streams) it stops the ridiculous growth seen at very low StrainTime values, usually from beatsnaps upwards of 1/8. At these values any significant movement is likely to create a hyperdash which will teleport the catcher without requiring the player to hold dash, also known as a hyperwalk. See https://osu.ppy.sh/b/944502 for an extreme example.

Add Antiflow Bonus

The existing direction change bonus results in flowing patterns being rated more than antiflow patterns.

Quick note on what is meant by flow and antiflow here. Flow is somewhat like when a pattern allows the player to carry some momentum into it. See how in the image below the direction changes are on the notes before and after the jump. Simply put, the movement has a flow to it.

Flow

Antiflow is basically the opposite. Direction changes are made on the notes in the jump and the particularly tricky bit is the direction change at the end of the jump as the player cannot carry any momentum into it. In the context of SR calculation, antiflow will be the strength of a movement before a direction change.

Antiflow

The new antiflow bonus weights a direction change based on the strength of the movement before it.

Live
direction_change_bonus = 12.5

New
direction_change_bonus = 9.5
antiflow_bonus = 25.0

double antiflowBonusFactor = Math.Min(Math.Abs(distanceMoved) / 70, 1);

distanceAddition += (antiflow_bonus / (catchCurrent.StrainTime / 40 + 10)) * (Math.Sqrt(Math.Abs(lastDistanceMoved)) / Math.Sqrt(lastStrainTime + 20)) * antiflowBonusFactor;

The old direction change bonus is only slightly reduced as part of larger balancing efforts. It’s important to still give a base bonus to direction changes as to not completely devalue flow movements.

The antiflow bonus is scaled by a separate bonus factor to the one used for the direction change bonus. This attempts to prevent patterns like this from being too highly rated. Without the bonus factor, this pattern would be the new bread and butter of pp mapping.

The bonus currently uses the square root of the last distance moved as opposed to linear scaling. Linear scaling seems to only really reward beatmaps with large cross screen jumps and leaves others rather underrated in comparison.

Reduce Distance Scaling

As part of balancing all these changes, distance scaling is also reduced a bit as large jumps are rated much higher with the antiflow bonus.

Live
distanceMoved^1.3 / 500

New
distanceMoved^1.3 / 600

Increase Base Bonus For Every Movement

There’s an existing base bonus for every movement which, as the comment in the code says, gives weight to streams. Increasing this helps give a bit of weight to smaller movements in beatmaps which aren’t rated highly due to their lack of distance.

Live = 7.5
New = 10.0

Remove SR inflation caused by wide buzz sliders

This fixes the issue outlined in #82

The implementation is pretty much as proposed but now scales between 80ms and 60ms.

Near enough fixes the issue without impacting innocent beatmaps. SR growth from these stacks can still happen but it is far less extreme.

if (Math.Abs(distanceMoved) > 0.1)
{
	if (Math.Abs(lastDistanceMoved) > 0.1 && Math.Sign(distanceMoved) != Math.Sign(lastDistanceMoved))
	{
		if (Math.Abs(distanceMoved) <= (CatcherArea.CATCHER_SIZE) && Math.Abs(lastDistanceMoved) == Math.Abs(distanceMoved))
		{
			if (catchCurrent.StrainTime <= 80 && lastStrainTime == catchCurrent.StrainTime)
			{
				distanceAddition *= Math.Max(((catchCurrent.StrainTime / 80) - 0.75) * 4, 0);
			}
		}
	}	
}

Adjust Star Scaling Factor

Final change is a slight adjustment to the star scaling factor. This is to keep overall numbers hitting similar ranges as before.

Live = 0.145
New = 0.15

Check out this spreadsheet to see an example of all these changes on a few beatmaps.

These changes certainly don’t “fix” catch SR, beatmaps are still over and underrated but most see improvements in the right direction. Converts and specifics seem much better matched and SR values at the high end look to be more sane, it’s much rarer to see a beatmap above 8 stars.

On the lower end things might look more familiar, Cups and Salads are at very similar SR values compared to live. Most Platters see a slight decrease and most Rains see a varying decrease due to flow and speed changes. Overall there doesn’t appear to be any adverse effects on lower difficulties.

Statistical osu SR & performance calc

I'm proposing a new SR & difficulty calculation which aims to:

  • properly incorporate diff spikes and length into SR - SR now represents difficulty to FC, not peak difficulty.
  • give better judgements of imperfect plays by assessing the difficulty to get a given combo and miss count, rather than a flat punishment for every map

This issue is the same as the bottom half of #64 which has become a bit hard to follow.

There's a draft writeup of the methodology here.
The code is at PR ppy/osu#4773 (I've also got an osu-tools branch here with a few improvements)

A few results:

freddie benson
walkingtuna
idke
nathan on osu
karthy

me (~50k)
starbin1 (~100k)

Todo:

  • tweak parameters for best results. The most important are StarBonusPerLengthDouble and StarBonusBaseTime in osu.Game.Rulesets.Osu/Difficulty/OsuSkill.cs (they can also be overriden separately in Aim.cs and Speed.cs)
  • make abraker95's suggested changes to the write-up
  • possibly refactor to simplify the implementation - it's currently transforming strain using log(x) before doing calculations, then transforming back at the end. Following the log(x) through all the calculations simplifies some stuff, adds more concrete meaning to the parameters, and makes the implementation match the writeup more closely.
  • Add SR multiplier to control SR inflation

[osu!std] Some old maps award 0 acc pp because of 0 hit objects

List of affected maps:
https://osu.ppy.sh/b/97
https://osu.ppy.sh/b/107
https://osu.ppy.sh/b/159
https://osu.ppy.sh/b/161
https://osu.ppy.sh/b/183
https://osu.ppy.sh/b/207
https://osu.ppy.sh/b/238
https://osu.ppy.sh/b/250
https://osu.ppy.sh/b/305
https://osu.ppy.sh/b/369

According to the sql data dump "2018_05_17_performance_osu_top.tar.bz2" the respective countTotal, countNormal, countSlider and countSpinner fields of these maps all equal 0.

I'm not sure whether this is intended behavior or a bug that's either been overlooked or disregarded, so I wanted to report it somewhere.

A potential fix for AR

Reworking from scratch, focusing on high AR first, then low AR (since low AR is more complex in my opinion, because many things contribute to reading, while high AR mainly depends on reaction).

Simple buff for low AR

Background

Most would agree that low AR is very underweighted with the current PP algorithm. To address this, I'm proposing a simple buff to aim PP based on AR.

Calculation Proposal

For AR less than 7:
Multiplier = 1.6

For AR between 7 and 10:
Multiplier = 1 + (10 - AR)/5

This factor is multiplied by aim PP to give the buffed aim PP. This translates to a 20% buff for AR9, a 40% buff for AR8, and a 60% buff for AR7 and below. These values may seem a bit extreme, but remember this is only being applied to aim PP. After total PP is calculated, the buff turns out to be very modest.

The rationale for the constant bonus below AR7 is that plays in this range are far more likely to be memorized as opposed to sightread (not including easy/medium difficulties). So, there should not be any additional AR reading bonus.
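
Expressed as code, the proposal is simply the following (a sketch; the function name is mine):

double LowArAimMultiplier(double approachRate)
{
    if (approachRate >= 10.0)
        return 1.0;                              // no bonus at AR10 and above
    if (approachRate < 7.0)
        return 1.6;                              // constant bonus below AR7
    return 1.0 + (10.0 - approachRate) / 5.0;    // linear between AR7 and AR10
}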

Some Results

AR7.5
-GN | Ryoushi no Umi no Lindwurm [Death of the Quantum Sea] 96.49% FC
314pp -> 401pp

AR8
chocomint | Yakumo >>JOINT STRUGGLE (2014 ReWorks) [Phantasm] +HD 98.98% FC
354pp -> 392pp
Mismagius | C-TYPE [SS-TYPE] 100% FC
232pp -> 298pp
WubWoofWolf | Cry for Eternity [Legend] 99.33% FC
361pp -> 418pp

AR9
Vaxei | Rainbow Dash Likes Girls (Stay Gay Pony Girl) [Holy Shit! It's Rainbow Dash!!] 99.77% FC
487pp -> 546pp
chocomint | Kami no Kotoba [Voice of God] +HD 99.86% FC
460pp -> 511pp
Rafis | FREEDOM DiVE [FOUR DIMENSIONS] 99.93% FC
586pp -> 627pp

EZ/HT Scores
Riviclia | Koi no Hime Hime Pettanko [Taeyang's Ultra Princess] +EZ,HD,DT 98.13% FC
485pp -> 644pp
-GN | The Promethean Kings [The Merciless] +EZ,HT 98.68% FC
397pp -> 520pp
Cappy | Those Who from the Heavens Came [Fengshen Yanyi] +HT 99.75% FC
395pp -> 429pp

Notes

  1. The low AR buff shouldn't apply to FL scores, since AR is practically irrelevant in this case.
  2. Some may have concerns about easy/medium maps receiving buffs since they naturally have low AR. This shouldn't be a major issue, since the buffs will be minimal (i.e. 40pp -> 44pp).
  3. HD already has a low AR buff in the current algorithm. This buff will be applied on top of the one that already exists, giving HD plays two separate buffs.

Does not compile on OS X

cmake works successfully, but make does not.

[  4%] Generating PrecompiledHeader.h.gch
/bin/sh: clang++-3.5: command not found
make[2]: *** [Src/Processor/PrecompiledHeader.h.gch] Error 127
make[1]: *** [Src/Processor/CMakeFiles/Client_PrecompiledHeaderDependency.dir/all] Error 2
make: *** [all] Error 2

OS X has clang and clang++ when you install Xcode, but clang++-3.5... well, bash says not found.

$ clang++ --version
Apple LLVM version 7.3.0 (clang-703.0.31)
Target: x86_64-apple-darwin15.5.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin

I couldn't find the word "clang" in any of the Makefiles, so I don't know how to fix it.

Stop score overwrite if pp of the play is lower

This one is self explanatory: a score should not be internally overwritten if the pp from the new play is lower. You achieved the old score previously, therefore it should count towards your pp. You could still keep the pp from the previous score by creating an "internal" mod like Touch Device; for the sake of this proposal we will call it "TopPP". Upon a score overwrite, a simple if statement could check whether the pp is higher or lower. If it is lower, take the old performance, add the "TopPP" mod to it, save it with that mod combo, and save the new score with the mod combo it was submitted with. The "TopPP" score wouldn't need to show up on leaderboards if you want things to be consistent, and you could easily show on the website that it is no longer their top score on that map with that mod combo by displaying said mod with an icon on the site. This would allow for the best of both worlds.

This proposal would accomplish 2 things. It would no longer discourage players from trying to improve a performance on a map they enjoy, and it would also no longer confuse a new player for losing rank or pp from improving their score.

Few but High OD circles blatantly overrated.

Recently, mappers have been giving very high OD to maps that are very short, in some cases only 81 circles, as with Will Stetson - Harumachi Clover [Oh no!], which is OD9 yet easy enough to add DT to, effectively making it OD10.4.

The point is that OD is one of the reasons these very short maps give an excellent reward even without particularly high accuracy. You can even run simulations on those same maps with DT but with a base OD of 6 or 7 instead of 8 to 9; you will see a large pp difference just from lowering the OD.

My proposal is to decrease the accuracy pp awarded for 300s on maps with OD higher than 9 and fewer than 300 circles, which especially affects the shortest maps. Under this approach, Future's Son [Insane] would be affected far less than its [OK DAD] difficulty when played with DT or HR, while most newer players getting used to osu!, who commonly play shorter maps, would not be affected.
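
Purely as an illustration, the shape of such a reduction could look like the sketch below; the OD > 9 and sub-300 circle thresholds come from the paragraph above, while the strength of the scaling is a placeholder, not a proposed value:

#include <algorithm>

double ShortMapAccScale(double overallDifficulty, int numCircles)
{
    if (overallDifficulty <= 9.0 || numCircles >= 300)
        return 1.0;                                    // unaffected
    // scale the accuracy value down the shorter the map is (placeholder curve)
    return 0.7 + 0.3 * std::min(1.0, numCircles / 300.0);
}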

I've talked with some friends about the previously mentioned nerf to all maps with fewer than 500 clickable objects; they argue that it "could" hurt the motivation of newer players (a point I neither completely agree nor disagree with).

This idea could also complement the existing plans to nerf very short maps. If the pp reward for high accuracy on very short maps is reduced considerably, it would mostly affect players who cannot be considered newbies, players capable of SSing OD10.4, but only on very, very short maps, whose accuracy cannot be said to be consistent on longer maps.

The minimum OD affected by this suggestion does not necessarily have to be 9, nor does the circle count have to be exactly 300; both values can be adjusted to whatever you consider best. Unlike a total object count approach, this directly targets only hit circles, not sliders or spinners.

I'll keep posting ideas here gradually; of course I have a lot more to propose and explain, but for now I'll leave the topic at this point.

Acc based on Statistical Accuracy

I'm going to be off/away for a while, so I wanted to get this out there.
So I wrote up a statistically accurate accuracy pp type thing.
Mad props to Full Tablet for the inspiration.
I discussed this a long time ago (before I knew this discord even existed), but never got around to doing anything about it:
https://osu.ppy.sh/community/forums/topics/727540?start=6741333
(Ignore my ignorance in this post)

precision = 0.01;
p = 1.645; //based on z-score values
double sGuess(acc,OD)
{
     //approximate mean and standard deviation for OD slope
     u1 = -6.635;
     o1 = 1/0.518926;
     // approximate mean and standard deviation OD intercept
     u2 = 88.485;
     o2 = 1/0.0386966;
     return (u1 + erfinv(2*acc - 1)*o1*Math.sqrt(2))*OD + (u2 - erfinv(2*acc - 1)*o2*Math.sqrt(2));
}
double E(s,MS)
{
     return erf(MS/(Math.sqrt(2)*s));
}
double E300 = E(s,79.5-6*OD); 
double E100 = E(s,139.5-8*OD); 
double E50 = E(s,199.5-10*OD);
double mean(s,OD)
{
     return (2/3)*E300(s,OD) + (1/6)*E100(s,OD) +(1/6)*E50(s,OD);
}
double Ex2(s,OD)
{
     return (8/9)*E300(s,OD) + (1/12)*E100(s,OD) +(1/36)*E50(s,OD);
}
double sd(s,OD)
{
     return Math.sqrt(Ex2(s,OD) - Math.pow(mean(s,OD),2));
}
double f(acc,num_circles,s,OD)
{
     return acc - mean(s,OD) - Math.sqrt(2/num_circles) * p * sd(s,OD);
}

double bijectionS(acc,num_circles,OD)
{
//initial guesses
double s0;
double s1;
s0 = sGuess(acc,OD);
s1 = sGuess(acc - Math.sqrt(2/(num_circles-1))*p,OD);
//approximation

while(abs(s1-s0) > precision)
{
     if(f(acc,num_circles,(s0+s1)/2,OD)* f(acc,num_circles,s0,OD) > 0)
     {
          s0 = (s0+s1)/2;
     }
     else if(f(acc,num_circles,(s0+s1)/2,OD)* f(acc,num_circles,s1,OD) > 0)
     {
          s1 = (s0+s1)/2;
     }
}
return 0.5 * (s0 + s1);
}

The hard part is finding the function for the acc pp in terms of s.
Ideally we could just directly convert the OD part and be done with it.
2.83*1.52163^OD => a*1.52163^(13.25-s/6) for some constant a.
Although warranted, this could lead to some big changes for decent accuracies.
I don't know how the error function and error inverse function are established in osu, so I just labeled them as erf and erfinv respectively.
I shorthanded some stuff here for clarity, but for actual coding some stuff will have to change, e.g. E300, E100, & E50 might have to be defined like E is instead of shorthanded like it is. I have to fix the inputs of the functions (putting their data type before and such, but that can easily be done later).

Low AR buff based on Star Rating

(original idea proposed by Nymfaye)

The Problem
Currently, most of the playerbase can agree that EZ/low AR is underweighted due to object density/reading not being taken into account into the system.

The Basic Approach
Nymfaye proposed that low AR could be buffed based on Star Rating. One could say that this doesn't address the real issue with low AR, but my argument to that is that Star Rating is very closely linked to a map's object density.
I built up my own idea from Nymfaye's idea. I created a low AR threshold, which is based off a map's star rating. Then, I created a bonus which was based on how much lower the AR of the map was compared to the low AR threshold.
Here are some (badly) drawn graphs showing my idea.
For aim pp: the bonus applies whether hidden is enabled or not. However, hidden gives an extra 30% bonus.
For speed pp: the bonus applies only when hidden is enabled.

Juicy Math
Formulas
Formula for Low AR Threshold Calculation

  • StarRating refers to the separate aim/speed star rating, not the total star rating.

Formula for Bonus Calculation

  • Enabling Hidden gives a 30% boost to the aim bonus.
  • This bonus is only given to speed if Hidden is enabled.

Values
Aim Values:

ARThresholdCeiling = 9.2
LowARThresholdCurveIntensity = 10
StarRatingThreshold = 2
BonusCeiling = 0.25
BonusCurveIntensity = 2

Speed Values:

ARThresholdCeiling = 9.2
LowARThresholdCurveIntensity = 7
StarRatingThreshold = 1.25
BonusCeiling = 0.12
BonusCurveIntensity = 3

Explanation
Explanation of Values

Results
HONESTY +EZ 345pp -> 375pp
HONESTY +EZHD 404pp -> 468pp
SHIORI Apex +EZDT 363pp -> 390pp
SHIORI Apex +EZHDDT 406pp -> 452pp
(as you can tell, low ar stream maps gain a LOT more by adding hidden than low ar aim maps)
Riviclia's top play: Hime Hime +EZHDDT 450pp -> 503pp
-GN's top EZ play: Mekadesu +EZHDDT 422pp -> 473pp
Ekoro's top EZ play: IGNITE +EZDTFL 431pp -> 447pp
Ekoro's second top EZ play: Daisuki Evolution +EZFL 410pp -> 465pp
exc's top play: d.m.c +EZHDDT 382pp -> 414pp

Conclusion
This may be considered a "bandaid fix" and that is correct, but we think that this change can possibly make the values for low AR plays a little more reasonable. We hope to get opinions from many other people, especially players with experience in this sort of stuff. Thank you for reading!

OD/AR inaccuracies in beatmap difficulty attributes table

After messing around some, I noticed that, with DT specifically, the table is not giving what should be the correct values for OD and AR.

http://puu.sh/os5ex/b5097957f2.png

This table is saying OD 8.5 +DT is OD 10.11, and AR 9.3 (labeled as 9.31?) is AR 10.54.

Using http://w.ppy.sh/8/88/ODTable.png for OD and https://i.ppy.sh/de463a31f71b68ea457cec732efad1e2e21513c7/687474703a2f2f772e7070792e73682f362f36382f41525461626c652e706e67 for AR as references, the DT OD for 8.5 should be 10.0833 (19ms) and the DT AR for 9.3 should be 10.5333 (370ms)

For the AR, this inaccuracy starts at the nomod value (9.31), which would be 10.54 with DT, as stated in the table.

For OD, the 8.5 is accurate for nomod but 10.11 is not accurate for DT.
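
For reference, the expected DT values quoted above can be reproduced with the usual conversions (DT scales all times by 2/3; the 300 hit window uses the same 79.5 - 6 * OD formula referenced elsewhere in this thread, and the preempt formula is the standard AR table):

double DtOd(double od)
{
    double window300 = (79.5 - 6.0 * od) * 2.0 / 3.0;  // OD8.5: 28.5ms -> 19ms
    return (79.5 - window300) / 6.0;                    // -> 10.0833
}

double DtAr(double ar)
{
    double preempt = ar < 5.0 ? 1800.0 - 120.0 * ar : 1200.0 - 150.0 * (ar - 5.0);
    preempt *= 2.0 / 3.0;                               // AR9.3: 555ms -> 370ms
    return preempt > 1200.0 ? (1800.0 - preempt) / 120.0
                            : 5.0 + (1200.0 - preempt) / 150.0; // -> 10.5333
}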

Is this an error or is there a hidden explanation for this?

Will it be helpful to decouple aim, speed and accuracy pp?

As title.

As any weighting of a specific skill basis set will be under/overweighted in some ways, is it just better to decouple them, and find a "minimum orthogonal basis set" with like 3 to 5 dimensions to describe the performance, with separate leaderboards?

I think (aim, speed, accuracy) is small and orthogonal enough as a skill basis, and changes could be rolled out quickly while solving some "underweighted" or "overweighted" problems. The overall ranking could then be replaced by ordering the sum of the three per-dimension rankings, which I consider a community-scale average rather than a personal-scale average, favouring all-round players. This should also mean less flaming in the community about "overweighted" maps.

If (aim, speed, accuracy) is not suitable, I think Syrin's documentation is great, but it would require reworking the whole system and fitting it with suitable parameters.

Osu!Catch

Could something like this be implemented in Catch too? This may partially fix broken maps like Image Material

Edit: I meant to comment on Xexxar's post, how do I delete this? ;-;

EDIT 2: After a quick google search I noticed it's impossible to delete this myself. I feel embarrassed, could a mod please delete this?

Touchscreen SR rebalance based on angles and streams

As many of you may know, touchscreen, both before and after the nerf, has always been extremely unbalanced. This touchscreen rebalance should hope to help fix extremely underweighted maps with touchscreen.

What will be added:

Aim Buff: Rebalance touchscreen aim strain based on angle; angles under 15 degrees get a slight nerf, while angles over 15 degrees get a buff that grows up to around 100 degrees and then stays nearly the same until 180 degrees. For the most part this only affects aim, though it has a very small effect on speed.

Stream Buff: Streams will get a quite significant buff (sketched after this list) if:

  • They are over 165bpm in streaming speed (330bpm singletaps)
  • They are over 100 degrees in angle
  • They are over 70 pixels apart (Buff until 180 pixels)
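
For concreteness, the conditions above could be checked roughly as follows (a sketch; the 0.2 scaling factor is a placeholder, not a proposed value):

#include <algorithm>

double TouchscreenStreamBuff(double strainTime, double angleDegrees, double distance)
{
    bool isBuffedStream = strainTime < 60000.0 / (165.0 * 4.0) // faster than 165 bpm 1/4 streaming
                       && angleDegrees > 100.0
                       && distance > 70.0;

    if (!isBuffedStream)
        return 1.0;

    // buff grows with spacing until 180 pixels
    return 1.0 + 0.2 * std::min(1.0, (distance - 70.0) / (180.0 - 70.0));
}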

Here is a Google Doc of many example values:

Much more information can be found here

Decide on Doxygen comment format standard

If we're not going to be using Doxygen or other form of automatic documentation, then feel free to just close this.

Coming from a corporate setting I've always been told that comprehensively documenting your code is not an optional thing. Doxygen makes it really easy to just whip up a PDF of all the methods and classes so people can reference stuff quickly, but if we start doing it we should stick to one comment standard.

Personally I don't really have a preference to which style is used (some examples from the Doxygen manual), but the easiest one IMO is the /// block because Visual studio automagically fills in the necessary stuff when you type the ///. Thoughts?
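
For reference, the /// style would look something like this (a hypothetical function, purely to show the format):

/// Computes the accuracy portion of the performance value.
///
/// @param accuracy          The score's accuracy in the range [0, 1].
/// @param overallDifficulty The beatmap's overall difficulty (OD).
/// @return                  The resulting accuracy pp value.
double ComputeAccuracyValue(double accuracy, double overallDifficulty);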

general remarks

I don't feel confident enough in writing a PR since there are no tests for me to run & check if I broke stuff.

Src/Shared/Config

Src/Shared/Core/Active

  • if you can "afford" std::make_unique, use it (no big deal anyways).
  • foo == false is better expressed as !foo.
  • any reason to catch (...) and assign the exception, instead of using your destructor with std::uncaught_exception(s)? (The behavior of the two approaches is very different, and the second one is C++17 only.) The problem is that this is dangerous... on Windows, you can catch a lot more than you might first expect if SEH exceptions are enabled - that includes access violations...
  • don't use NULL; use nullptr
  • any code after std::rethrow_exception is dead code /!\

Src/Shared/Core/Exception

  • you can default your destructors.

Src/Shared/Core/Logger

  • don't take a std::string&& to forward it. Either do that, and std::move, or just do it the "simple" way: take it by value (and then move it).
  • CLog::Log takes the text by rvalue reference, but then copies the value inside the lambda. If you enabled C++14, take it by value and use [Text=std::move(Text)]() { ... } to move it inside the lambda.
  • auto a = std::string{}; really is just std::string a;
  • why do you convert your string to a c_str to call Write?
  • sizeof(char) is 1.

Src/Shared/Core/StringUtil

  • If you take your param by non-const ref, don't return it as well. (especially since your 2nd overload doesn't use the return value). That's true for split's first overload, and for the trim functions

Src/Shared/Core/Threading

  • That's some pretty... hairy code? I mean, mutexes named L, N and M make it pretty hard (for me) to reason through it. I'm not sure you couldn't use some C++11/C++14/Boost mutex type? (C++17 and Boost both have a shared mutex impl (i.e. multi reader, only one writer)).
  • When you know the size you need, you can .reserve enough space.
  • it doesn't make sense to move the return of a function. It's already a rvalue -- the compiler will do the moving for you. (yeah, the rules are really complex...).
  • you don't need a new scope for that last unique_ptr in "StartThreads" since RAII works LIFO (last in, first out).

Src/Shared/Network/DatabaseConnection.

  • you could move all your string parameters in the constructor
  • instead of using new MYSQL, use a unique_ptr with a custom deleter (that's gonna be mysql_close). You can remove all the delete this way (and use .reset() where needed).
  • use nullptr instead of NULL.
  • while (a != 0) is while (a)

Src/Shared/Network/QueryResult

  • use nullptr instead of NULL.

That's pretty much all I saw for Src/Shared. I'll try to look at other files tomorrow :)

Statistical analysis of imbalance between speed star rating and aim star rating: aim is overweighted.

Introduction

I used my own dataset from https://grumd.github.io/osu-pps/ to see if I can find anything interesting.

This dataset relies on one assumption: if a map is overweighted (easier to get PP) then it will more often be one of the top pp plays for many different players. So I gathered statistics of tens of thousands of players and aggregated their top plays to see which maps are the most popular PP sources. Additionally, maps that are often a top 1 play receive more points than maps that are a top 5 play.

Dataset

I ended up with a dataset of ~75000 maps. I sorted it from most overweighted to least overweighted.
I took 1000 maps from the dataset (every 75th map).
I used oppai on all of them. I calculated star rating: speed stars and aim stars. I also calculated pp, including aim pp and speed pp. I then divided aim values by speed values to get a ratio. If this ratio is higher than 1, it means this map is more aim-based, gives more pp for aim, has more difficulty for aim.

I made a scatter plot showing how these ratios between aim difficulty and speed difficulty correlate with overweightness of a map.

Results

Star rating ratio:
(left - more overweighted; right - less overweighted)
(Y axis = aim stars / speed stars)

PP values ratio:
(left - more overweighted; right - less overweighted)
(Y axis = aim pp / speed pp)

The website I made the scatter plot on builds a trend line automatically. It shows that on average the most overweighted maps have 5% more aim stars than speed stars, while the least overweighted maps have 5% less aim difficulty than speed difficulty. In terms of pp, the most overweighted maps give 25% more pp for aim than for speed, while the least overweighted give about 5% less pp for aim than for speed.

Conclusion

On average, best pp plays of most players have more aim difficulty than speed difficulty, by 5%. A lot rarer are plays that have 5% more speed difficulty. This in turn makes it so most overweighted maps give 25% more pp for aim than for speed. 25% makes a 600pp play into a 750pp play.
Keep in mind that on average all maps have 1:1 ratio of aim to speed difficulty (star rating).

What should we do? We should just buff speed star difficulty by 5%. Or nerf aim difficulty by 5%.
This would make it so on average ALL maps are 1:1 ratio between aim diff and speed diff. Even currently overweighted maps and currently underweighted maps.
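
As a sketch of where such a flat adjustment could go (assuming the per-skill star ratings are combined in the usual aim + speed + |aim - speed| / 2 way; that assumption is mine, not part of the dataset analysis):

#include <cmath>

double TotalStars(double aimStars, double speedStars)
{
    speedStars *= 1.05; // proposed flat buff to speed difficulty
    return aimStars + speedStars + std::abs(aimStars - speedStars) / 2.0;
}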

I would love to implement this change locally and show you guys how it would affect some maps and some players, but I don't know how to do that. For now I'm only starting a discussion.

osu!catch - Remove SR inflation caused by wide buzz sliders

https://puu.sh/CLXGg/34a2ad34c6.png
The pattern above is caused by standard mode converts having wide buzz sliders. The player can just stand in the center to catch everything. One or two of these buzz sliders can raise a map's SR from 4-5* to 7-11* on its own.

Sing's Master on this map just before kiai is an example and can be FCd.
While unplayable, Kaede is a bit silly to be listed as 27*

This issue needs fixing sooner rather than later because mappers for standard mode have had their maps DQd to change patterns to avoid causing this issue.

Sorcerer's proposed fix suggests the following:

  • Check for direction change
  • Check if distance of movement is <= catcher size
  • Check if distance of current movement is equal to distance of previous movement
  • Scale overall SR addition by time, 1.00x at 100ms down to 0.00x at 50ms

Hitoffsets in score data

currently score data contains

maxCombo
num300
num100
num50
numMiss
numGeki
numKatu

Which is pretty limited info to deal with, and forces us to assume that the player breaks combo at whatever the calculated hardest parts of the map are. Having information relating to how accurately the player hit each note would allow to properly calculate performance of a play.

Hit offsets are also valuable data that I believe is needed for pp development. They are currently only available in replays, which are too expensive to access and only exist for top 500 scores. With score data containing an array of hit offsets, it would finally be possible to analyze the actual difficulty of each note across a wide spectrum of players and maps.

I imagine you would only need an array of signed 16-bit values for this (a ±32,767 range is plenty for a hit offset).

The high AR Bonus is faulty (With possible solutions)

Problem

Currently, scores done on high AR (10.33 to 11, only achievable by adding DT mod to an AR9+ map) receive a bonus multiplier to their aimvalue and speedvalue for being able to read/react to it. This multiplier starts at 1.0x for AR10.33 and increases linearly up to 1.2x at AR11.

The reasoning for this reward is not quite clear, due to there being multiple ways to tackle high AR maps which vary in difficulty depending on the map.

One way a player can tackle a high AR map is by just being able to read/react to it using their raw reaction time. For these kinds of players, the reward makes (some) sense. However, there is another way players can tackle high AR maps, and that is by memorisation. For these players, memorising a 1000 object map on AR11 for a 1.2x bonus multiplier may not be worth it, which is understandable. However, the bonus does not change depending on length; Players who memorise high AR maps get an equal reward for memorising a 100 object map as a 1000 object map. Thus, there is no reason for players who can’t read high AR to play anything other than the shortest of maps. In fact, these players are being vastly over rewarded for memorising or partially memorising short maps with high AR.

Think about all the recent high pp scores that were AR11 or close. They are all almost exclusively short maps (I haven’t had a thorough check but I’d be willing to bet that all are under 500 objects). Think about all those high pp scores on very short maps. The vast majority have at least DT,HR plus other mods. A good example of a recent offender is: https://osu.ppy.sh/b/1893461?m=0
many players have made their top play or a top 5 play on this sort of map.

Chart of Maps with AR11 scores over 600pp (Slightly Old): https://i.imgur.com/j7EMetT.png

The issue gets even more complicated when you consider the fact that many times players are using a combination of both methods – their reaction time AND some loose memorisation – in order to get the high AR bonus.

The solution(s)

I’m partially of a mind to scrap the AR bonus completely, mainly due to the fact that it’s too simplistic of a fix for a complex subject. There may even be more factors affecting difficulty of high AR that haven’t been considered. However, after running the numbers, this is far too harsh and completely ruins all incentive to play high AR(for now).

A more compromising solution would be to assume the player is memorising shorter maps. With memorisation, the difficulty is negligible at 1 object, and linearly increases as the amount of objects in the map increases. For longer maps the player will be assumed to be reading/reacting the map, and so the multiplier will stop scaling based on number of objects and remain constant, as we don’t want to over reward players who are not doing any memorisation and instead just reading. The point where short map becomes long map is unclear though. I’ve decided to place it at 500 objects.

This makes:

	if (approachRate > 10.33f)
		approachRateFactor += 0.3f * (approachRate - 10.33f);

Become:

	if (approachRate > 10.33f)
		approachRateFactor += 0.3f * (approachRate - 10.33f) 
		* std::min(1.0f, static_cast<f32>(numTotalHits) / 500.0f);

Top 10 Scores before and after: https://i.imgur.com/nKc4EjB.png

Longer maps such as Best Friends and caffeine fighter remain unaffected by this change.

This is from an old database dump, so I’ll need an up to date database dump in order to properly test things.

Conclusion

The problem I am quite sure about. The solution I am still unsure of, but would like to continue exploring possibilities.

A weighting system to promote polyvalence

Many players, including me, have many similar scores in their top 50. As a
result a player's total pp is no more than a partial representation of the
player's potential. My goal is to make a better representation by nerfing
redundant performances, hopefully encouraging players to diversify their
skillset and ultimately make them enjoy more facets of the game.

Currently the weight given to a play is purely based on how many better plays
have come before. We can improve this by taking the same system but basing the
weights on the similarities between the plays.

If a player's 8 best performances are very similar then their contributions
should be diminished one by one. But if the 9th and 10th plays are from a
different kind then they should count for more, maybe even in full again if
they are completely different. Going down the line of all played scores, all
plays will eventually be similar to enough scores to stop being significant to
the total.

The key new element here is defining when maps are similar. Maps can be
different in many ways, so they can also be similar in many ways. There are the
basic map metrics: AR, BPM, Combo/Length, OD, etc. There are emergent aspects like
aim, speed, accuracy, object density or how technical a map is to play. Though
for that last one we don't have an algorithm yet. The frequency of difficulty
spikes... I'm sure the list of things can be expanded even more once we start
thinking about it. The similarity between two maps is a combination of all
these things.

The weights for a given play could be based on the overall similarity between all of the maps, e.g.

play #3 is
- 80% similar to play1 : weight 1
- 90% similar to play2 : weight 2

total weight : weight 1 * weight 2

or the weights could be composed independently per aspect.

play #3 compared to play #1: 
- 100% similar in aspect a (same AR for example) : weight 1a
- 95% similar in aspect b (200bpm vs 210bpm) : weight 1b
- not close enough in aspect C : no weight
play #3 compared to play #2: 
- 100% aspect a : weight 2a
- not close enough for aspect b : no weight
- 80% similar in aspect 2c (300 hit objects vs 375 hit objects) : weight 2c

total weight: 1a * 1b * 2a * 2c

Note this doesn't prevent making individual rankings for individual aspects,
but it combines the plays into the "final metric" differently than simply
adding multiple rankings on top of each other.
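
Purely as an illustration, the per-aspect version could be sketched like this (the similarity measures, the "close enough" thresholds and the nerf strength are all placeholders, not proposed values):

#include <algorithm>
#include <cmath>
#include <vector>

struct Play { double ar, bpm, objectCount; };

double AspectWeight(double a, double b, double closeEnough)
{
    double similarity = 1.0 - std::min(1.0, std::abs(a - b) / closeEnough);
    if (similarity <= 0.0)
        return 1.0;                // not close enough in this aspect: no weight
    return 1.0 - 0.5 * similarity; // the more similar, the more the play is nerfed
}

double PlayWeight(const Play& play, const std::vector<Play>& betterPlays)
{
    double weight = 1.0;
    for (const Play& other : betterPlays)
    {
        weight *= AspectWeight(play.ar, other.ar, 1.0);                     // ARs within 1
        weight *= AspectWeight(play.bpm, other.bpm, 30.0);                  // BPMs within 30
        weight *= AspectWeight(play.objectCount, other.objectCount, 200.0); // object counts within 200
    }
    return weight;
}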

I'd be interested to make a concrete implementation and test it against the data that ppy provided.
As this is an idea that I haven't seen before (maybe I've missed it?) I would like to hear other people's thoughts on it first.
