ullmark / hashids.net

A small .NET package to generate YouTube-like hashes from one or many numbers. Use hashids when you do not want to expose your database ids to the user.

Home Page: http://www.hashids.org/net/

License: MIT License

C# 99.55% Shell 0.45%
hashids decoding encoding c-sharp

hashids.net's Introduction

Hashids

A small .NET package to generate YouTube-like IDs from numbers.

It converts numbers like 347 into strings like yr8, or array of numbers like [27, 986] into 3kTMd. You can also decode those IDs back. This is useful in bundling several parameters into one, hiding actual IDs, or simply using them as short string IDs.

http://www.hashids.org/net/

NOTE: You might want to use sqids-dotnet instead.

The original author of the Hashids algorithm has rebranded and created a new algorithm called "sqids"; you can read more about why here. A .NET version of sqids has already been created and can be found here.

This library has been considered more or less feature complete since its creation in 2012. It will remain available and will accept pull requests for bug fixes etc., but no new features will be added.

Features

  • Creates short unique IDs from integers. (only positive numbers & zero)
  • Generates non-sequential IDs for incremental input to stay unguessable.
  • Supports a single number or array of numbers. (supports int and long)
  • Supports custom alphabet and salt, so IDs are unique to your application. (salt must be smaller than the alphabet)
  • Supports minimum hash length.
  • Tries to avoid basic English curse words.

Notes

  • This is NOT a true cryptographic hash, since it is reversible.
  • Only zero and positive integers are supported; negative numbers will not be encoded (see the short sketch after this list).
  • Only a minimum hash length can be specified. There is no way to fit arbitrary numbers within a maximum hash length.
  • The alphabet must contain at least 16 unique characters and is case-sensitive.
  • Separators are characters used to encode multiple numbers in a hash and must also be present in the alphabet.
  • The salt must be shorter than the alphabet; its effective length is limited to the alphabet length minus the separators minus one.
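
For example, a short sketch (not part of the original README) showing two of these constraints in action: encoding a negative number yields an empty string, and decoding with a different salt yields an empty array.

using HashidsNet;

var hashids = new Hashids("this is my salt");

var empty = hashids.Encode(-1);                        // "" -- negative numbers are not encoded
var none = new Hashids("another salt").Decode("NkK9"); // [] -- the salt must match the one used to encode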

Installation

Install the package with NuGet

Install-Package hashids.net

Usage

Import namespace

using HashidsNet;

Encoding one number

You can pass a unique salt value so your hashes differ from everyone else's. I use "this is my salt" as an example.

var hashids = new Hashids("this is my salt");
var hash = hashids.Encode(12345);

hash is now going to be:

NkK9

If your id is stored as an Int64, you need to use EncodeLong.

var hashids = new Hashids("this is my salt");
var hash = hashids.EncodeLong(666555444333222L);

hash is now going to be:

KVO9yy1oO5j

Decoding

Note that the same salt value is used when decoding:

var hashids = new Hashids("this is my salt");
var numbers = hashids.Decode("NkK9");

numbers is now going to be:

[ 12345 ]
var hashids = new Hashids("this is my salt");
var numbers = hashids.DecodeLong("KVO9yy1oO5j");

numbers is now going to be:

[ 666555444333222L ]

Decoding a single id

By default, Decode and DecodeLong will return an array. If you need to decode just one id you can use the following helper functions:

var hashids = new Hashids("this is my salt");
var number = hashids.DecodeSingle("NkK9");

number is now going to be:

12345
var hashids = new Hashids("this is my salt");

if (hashids.TryDecodeSingle("NkK9", out int number))
{
    // Decoding the hash was successful.
}

number is now going to be:

12345

You can handle the exception to see what went wrong with the decoding:

var hashids = new Hashids("this is my salt");
try
{
    var number = hashids.DecodeSingle("NkK9");
}
catch (NoResultException)
{
    // Decoding the provided hash did not yield any result.
}

number is now going to be:

12345
var hashids = new Hashids("this is my salt");
var number = hashids.DecodeSingleLong("KVO9yy1oO5j");

number is now going to be:

666555444333222L
var hashids = new Hashids("this is my salt");

if (hashids.TryDecodeSingleLong("KVO9yy1oO5j", out long number))
{
    // Decoding the hash was successful.
}

number is now going to be:

666555444333222L
var hashids = new Hashids("this is my salt");
try
{
    var number = hashids.DecodeSingleLong("KVO9yy1oO5j");
}
catch (NoResultException)
{
    // Decoding the provided hash did not yield any result.
}

number is now going to be:

666555444333222L

Decoding with different salt

Decoding will not work if the salt is changed:

var hashids = new Hashids("this is my pepper");
var numbers = hashids.Decode("NkK9");

numbers is now going to be:

[]

Encoding several numbers

var hashids = new Hashids("this is my salt");
var hash = hashids.Encode(683, 94108, 123, 5);

hash is now going to be:

aBMswoO2UB3Sj

Decoding is done the same way:

var hashids = new Hashids("this is my salt");
var numbers = hashids.Decode("aBMswoO2UB3Sj");

numbers is now going to be:

[ 683, 94108, 123, 5 ]

Encoding and specifying minimum hash length

Here we encode integer 1, and set the minimum hash length to 8 (by default it's 0 -- meaning hashes will be the shortest possible length).

var hashids = new Hashids("this is my salt", 8);
var hash = hashids.Encode(1);

hash is now going to be:

gB0NV05e

Decoding

var hashids = new Hashids("this is my salt", 8);
var numbers = hashids.Decode("gB0NV05e");

numbers is now going to be:

[ 1 ]

Specifying custom hash alphabet

Here we set the alphabet to consist of: "abcdefghijkABCDEFGHIJK12345"

var hashids = new Hashids("this is my salt", 0, "abcdefghijkABCDEFGHIJK12345");
var hash = hashids.Encode(1, 2, 3, 4, 5);

hash is now going to be:

Ec4iEHeF3

Randomness

The primary purpose of hashids is to obfuscate ids. It's not meant or tested to be used for security purposes or compression. Having said that, this algorithm does try to make these hashes unguessable and unpredictable:

Repeating numbers

var hashids = new Hashids("this is my salt");
var hash = hashids.Encode(5, 5, 5, 5);

You won't see any repeating patterns that might reveal there are four identical numbers in the hash:

1Wc8cwcE

Same with incremented numbers:

var hashids = new Hashids("this is my salt");
var hash = hashids.Encode(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);

hash will be:

kRHnurhptKcjIDTWC3sx

Incrementing number hashes:

var hashids = new Hashids("this is my salt");

hashids.Encode(1); // => NV
hashids.Encode(2); // => 6m
hashids.Encode(3); // => yD
hashids.Encode(4); // => 2l
hashids.Encode(5); // => rD

Encoding using a HEX string

var hashids = new Hashids("this is my salt");
var hash = hashids.EncodeHex("DEADBEEF");

hash is now going to be:

kRNrpKlJ

Decoding to a HEX string

var hashids = new Hashids("this is my salt");
var hex = hashids.DecodeHex("kRNrpKlJ");

hex is now going to be:

DEADBEEF

Changelog

v.1.7.0

  • PR #86 - Fix for decoding hash smaller than min length setting.
  • Project build targets now set to netstandard2.0, net6.0, net7.0.

v.1.6.1

  • PR #76 - Fix min buffer sizes.

v.1.6.0

  • PR #66 - Fixed invalid constant bug.
  • PR #67 and PR #73 - Update and cleanup tests and constants.
  • PR #65 - Improved performance and reduced allocations for single number decode.

v1.5.0

  • PR #59 and PR #61 - Project clean up and removal of net461 target.
  • PR #50 - Added support for .NET 6.
  • PR #49 - Optimized methods for single number encoding.
  • PR #57 - Optimized methods for single number decoding.
  • PR #54 and #58 - Fixed Github Actions build and test.
  • PR #55 - Removed System.Buffers dependency for .NET 5 and higher.
  • PR #47 - Improved performance with readonly and Span<T> usage.
  • PR #60 - Reference System.Memory to replace internal ReadOnlySpan<T> class.
  • PR #63 - Array and Span usage optimizations.
  • PR #62 - Documentation improvements.

1.4.1

  • PR #45 - Cleanup unused nuget references and replace Microsoft.Extensions.ObjectPool with internal implementation.

1.4.0

  • Modernized project with updated build targets now set to net461, net5.0, netstandard2.0
  • PR #30 - Fix floating-point math to handle large ratio of alphabet to separators.
  • PR #37 - Performance and memory optimizations.
  • PR #42 - Performance updates and added BenchmarkDotnet for profiling.
  • PR #43 - Improved performance and reduced allocations.
  • Issues #23, #32, #35 - Fix floating-point math, now replaced by Horner's method.
  • Issue #27 - Allow dashes in alphabet (dashes caused issues with Regex which is not used anymore).
  • Issue #21 - Fix encoding exception when decoding a character used as guard.
  • Issue #29 - Added tests to confirm thread-safety.

1.3.0

  • PR #26 - Support .netstandard2.0.

1.2.2

  • PR #19 - Only instantiate the HEX-related Regexes if any of the HEX functions are used. This speeds up creation of Hashids instances, since most users likely don't use the HEX functions.

1.2.1

  • PR #11 - Speed up consistent shuffle with less string manipulation.
  • Issue #15 - Decoding strings that contain characters not in the alphabet will now return an empty array. (To conform to the behaviour of the js-library.)
  • Issue #18 - Encoding with a negative number will now return an empty string. (To conform to the behaviour of the js-library.)

1.2.0

  • Added .NET Core support.

1.1.2

  • Fixed issue #14 that caused HEX values to be encoded/decoded incorrectly.

1.1.1

  • Accepted PR #12 that fixed an issue when encoding very many longs at the same time.

1.1.0

  • Added support for long via new functions to not introduce breaking changes.
    • EncodeLong for encodes.
    • DecodeLong for decodes.
  • Added interface IHashids for people who want an interface to work with.

1.0.1

  • The .NET 4.0 version of the package used .NET 4.5 as its build target. This was fixed and a new version was pushed to NuGet.

1.0.0

  • Several public functions marked obsolete and renamed versions added, to be more appropriate:

    • Function Encrypt() changed to Encode()
    • Function Decrypt() changed to Decode()
    • Function EncryptHex() changed to EncodeHex()
    • Function DecryptHex() changed to DecodeHex()

    Hashids was designed to encode integers, primary ids at most. We've had several requests to encrypt sensitive data with Hashids and this is the wrong algorithm for that. So to encourage more appropriate use, encrypt/decrypt is being "downgraded" to encode/decode.
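
A small before/after sketch of the rename (illustrative only; the argument is just an example):

// Pre-1.0.0 (now obsolete):
//   var hash = hashids.Encrypt(12345);
//   var numbers = hashids.Decrypt(hash);

// 1.0.0 and later:
var hash = hashids.Encode(12345);
var numbers = hashids.Decode(hash);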

0.3.4

  • The public functions are now virtual and therefore can be mocked with a mocking library.

0.3.3

  • Rewrote the code to support the new hashing algorithm.
  • Support for EncryptHex and DecryptHex

0.1.4

  • Initial version of the port.

hashids.net's People

Contributors

alexandermlharris, cereal-killa, daramant, dariogriffo, dompagoj, ethomson, fatemehfattahi, gkalamernikov, jehoel, jetersen, keterscp, luisrudge, manigandham, marcosmeli, matthewking, nicknightingale, sa1gur, sraybell, tallesl, ullmark, xilapa

hashids.net's Issues

How use id with Guid/UUID/Sequential Guid

Sequential Guids can be predictable, and even with normal Guids it may be necessary to produce small URLs.

What is the best way to approach this?
Is it okay to hash it as a byte array with this library?
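
One possible approach (a sketch, not an official answer, and assuming EncodeHex accepts lowercase hex) is to run the GUID's 32-digit hex form through the library's EncodeHex/DecodeHex. Note that the result will not be shorter than the GUID itself:

using System;
using HashidsNet;

var hashids = new Hashids("this is my salt");

var id = Guid.NewGuid();
var hash = hashids.EncodeHex(id.ToString("N"));                // 32 hex chars in, hashid string out
var roundTrip = Guid.ParseExact(hashids.DecodeHex(hash), "N"); // back to the original Guid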

Thread safe

Is an instance of hashids.net thread safe? Can I use it as a singleton in my app?
new Hashids("this is my salt");
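
For reference, the changelog above notes that thread-safety tests were added (issue #29). A singleton registration in ASP.NET Core DI could look like the sketch below (IHashids is the interface the library provides; builder is assumed to be the standard minimal-hosting WebApplicationBuilder):

// Sketch: register one shared Hashids instance for the whole application.
builder.Services.AddSingleton<IHashids>(_ => new Hashids("this is my salt"));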

Encrypting / Decrypting multiple numbers including zeros

Hey mate, cool library. I am running into some issues with zeros:

var hashids = new Hashids("this is my salt");
var hash = hashids.Encrypt(1, 2, 0); //produces KjfAI
var numbers = hashids.Decrypt(hash); //returns empty array

If I change the order of the numbers it seems to work correctly.

var hashids = new Hashids("this is my salt");
var hash = hashids.Encrypt(0, 1, 2); //produces AUXt9
var numbers = hashids.Decrypt(hash); //returns [ 0, 1, 2 ]

Question: Is the hash algorithm "future-proof"?

As I understand it, certain offensive or inappropriate words are omitted when generating a hash. Cool. What if the list of dirty words gets longer as time goes by? Will that change the logic of the hashing algorithm? Will I be able to de-hash something I hashed today 10 years from now?

Thanks!

Attempting to Decode short strings results in incorrect exceptions being produced

1 character length strings produce:
System.IndexOutOfRangeException: Index was outside the bounds of the array.
at HashidsNet.Hashids.GenerateHashFrom(Int64 number, Span`1& result)
at HashidsNet.Hashids.GetNumberFrom(String hash)
at HashidsNet.Hashids.DecodeSingle(String hash)

2 character length strings produce:
System.ArgumentException: Destination is too short. (Parameter 'destination')
at HashidsNet.Hashids.GenerateHashFrom(Int64 number, Span`1& result)
at HashidsNet.Hashids.GetNumberFrom(String hash)
at HashidsNet.Hashids.DecodeSingle(String hash)

for any "hash" strings of 1 or 2 characters in length. A NoResultException should be produced in these cases.

Incorrect value when decoding hex

Hi,

It looks like the hex decoding isn't working. I tried the example on the project page and it is losing the first character:

    var hashids = new Hashids("this is my salt");
    var hash = hashids.EncodeHex("DEADBEEF");  //kRNrpKlJ
    var decoded = hashids.DecodeHex(hash);  //EADBEEF
    hashids.DecodeHex("kRNrpKlJ");  //EADBEEF

On longer hex values it is missing more characters:

    var hashids = new Hashids("this is my salt");
    var hash = hashids.EncodeHex("1234567890ABCDEF"); //57DeZ32Xq1t7X5y
    var decoded = hashids.DecodeHex(hash); //67890ABCDEF

Thanks,

Val

.Net version not working on xamarin forms

I have tried to install hashids into a Xamarin Forms PCL app and it doesn't install properly.

This is the error:

Could not install package 'System.Runtime.InteropServices.RuntimeInformation 4.0.0'. You are trying to install this package into a project that targets '.NETPortable,Version=v4.5,Profile=Profile111', but the package does not contain any assembly references or content files that are compatible with that framework. For more information, contact the package author.

It would be incredibly useful if you could add support for Xamarin!
Thanks!! :)

CoreCLR Support

Drop in source copy doesn't seem to compile under DNX Core RC1:

C:\projects\bogus\Source\Bogus\Hashids.cs(363,35):
DNXCore,Version=v5.0 error CS0117: 'string' does not contain a definition for 'Copy'

Any ideas?

Thanks,
Brian

HashIds.net does not work in Azure Functions

When bundled with Azure Functions using dotnetcore3.1, hashids.net does not work at runtime and throws the following exception: Could not load file or assembly 'Microsoft.Extensions.ObjectPool, Version=5.0.0.0, Culture=neutral ...

What is the workaround to use hashids in Azure Functions?

Thread safety

On a quad-core 4 GHz machine with .NET Core 3.1, creating a new instance per call is about 110 times slower than using a single static instance (testing encode/decode with 1M integers).

Using a single instance is much faster, so I'd like to share one instance across threads, which raises the question of thread safety.

I've been testing encode/decode with Parallel.For and I don't see any issues. No locks and ids generated are valid compared to creating new instances.

I'd appreciate it if you could confirm thread safety for this library; given the performance gains, it's worth using a single instance instead of creating a new one every time.
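
A minimal sketch of the kind of Parallel.For round-trip check described above (the salt and iteration count are just examples):

using System;
using System.Threading.Tasks;
using HashidsNet;

var shared = new Hashids("this is my salt");

Parallel.For(0, 1_000_000, i =>
{
    var hash = shared.Encode(i);
    var back = shared.Decode(hash);

    // Every hash must round-trip to exactly the number that produced it.
    if (back.Length != 1 || back[0] != i)
        throw new Exception($"Round-trip failed for {i}: {hash}");
});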

Valid use-case for library?

Hello! I came across your library and had a question about a possible use case; I wanted to ask if this is a valid usage and makes sense for how hashing works. Let's take it step-by-step:

  1. Query for some entity which has an ID and concurrency token
  2. Hash ID using concurrency token
  3. Update entity using hashed ID
  4. Try to decode hashed ID with concurrency token in table
  5. Update database with new data using ID and token

Except I would never be able to decode the hashed ID because I wouldn't know the primary key, so I wouldn't know which concurrency token to use. Is there any support for partial decoding? I suppose that would defeat the purpose of hashing if that were possible, but I thought I would ask anyway.

Hopefully that makes sense! Let me know if something like this is possible, or if that's not something that would work with your library. Thanks!

Return specific length of Hashid

Hello, I have this implementation in my code, but the result does not contain only 8 hash characters.
Hashids hashids = new Hashids(salt, 8, "0123456789abcdefghijklmnopqrstuvwxyz");
string hash = hashids.EncodeLong(666555444333222L);

Is it possible to return only 8 characters?

Optimization: decode a single ID without allocating an array

Hi,

I'd like to propose an optimization for the Decode method. I'll be decoding a single ID most of the time, so it might make sense to add the following methods:

//decode a single int ID from a hash
int DecodeSingle(string hash);

//decode a single long ID from a hash
long DecodeSingleLong(string hash);

This approach would not require allocating an array for returning single values.

Version upgrade from 1.3.0 to 1.4.0 produce different hashes

Problem

A version upgrade should not change the hash if the salt, alphabet, and minimum length have not been changed. Tested with a .NET 6 console application.

Results:

v1.4.0

HashID encode: id1: 500, id2: 60123, Encoded: qyrFRr7jCY, Should Be: 58QtxV6Qsw
HashID decode: qyrFRr7jCY, Decoded: [500,60123,0]

v1.3.0

HashID encode: id1: 500, id2: 60123, Encoded: 58QtxV6Qsw, Should Be: 58QtxV6Qsw
HashID decode: 58QtxV6Qsw, Decoded: [500,60123,0]

Code to reproduce

using System.Text.Json;

const string SALT = "OhMySaltiestSalt+2!!!!";
const string ALPHABET = "bcdfghjkmnpqrstvwxyzBCDFGHJKLMNPQRSTVWXYZ3456789";
const int MINUMUM_LENGTH = 10;
const string DEFAULT_SEPS = "cfhistuCFHISTU";

var id1 = 500;
var id2 = 60123;

HashidsNet.Hashids hashids = new HashidsNet.Hashids(SALT, MINUMUM_LENGTH, ALPHABET, DEFAULT_SEPS);
string hashid = hashids.Encode(id1, id2, 0);
string shouldBe = "58QtxV6Qsw";

Console.WriteLine($"HashID encode: id1: {id1}, id2: {id2}, Encoded: {hashid}, Should Be: {shouldBe}");
Console.WriteLine($"HashID decode: {hashid}, Decoded: {JsonSerializer.Serialize(hashids.Decode(hashid))}");

Decoding input smaller than minHashLength unhandled exceptions are thrown

It seems that when decoding input with a length smaller than minHashLength, Decode throws System.ArgumentException and System.ArgumentOutOfRangeException exceptions.

Expected Behavior

I guess it would be to validate the input before doing a .Decode() and either throw a meaningful exception or return an empty array.

Current Behavior

Currently, the library sometimes throws (undocumented?) System.ArgumentException & System.ArgumentOutOfRangeException exceptions.

Possible Solution

Properly validate the input before performing any actions.

Steps to Reproduce

  1. new HashidsNet.Hashids(salt: "Dqa2s3RJBYPHUzg&R5qkF3Z4HLaWp#A^kMc^DqKVmqag2tasQjhz-PSM23=4", minHashLength: 9)
  2. .Decode("5111111")
  3. System.ArgumentException: Destination is too short. (Parameter 'destination')

&

  1. new HashidsNet.Hashids(salt: "Dqa2s3RJBYPHUzg&R5qkF3Z4HLaWp#A^kMc^DqKVmqag2tasQjhz-PSM23=4", minHashLength: 10)
  2. .Decode("5111111")
  3. System.ArgumentOutOfRangeException: Specified argument was out of the range of valid values.

Context (Environment)

.NET version: ASP.NET Core 6.0 - basic API controller
Hashids.net version: 1.6.1
StackTrace 1:

System.ArgumentException: Destination is too short. (Parameter 'destination')
   at System.Text.StringBuilder.CopyTo(Int32 sourceIndex, Span`1 destination, Int32 count)
   at HashidsNet.Hashids.GenerateHashFrom(ReadOnlySpan`1 numbers, Span`1& result)
   at HashidsNet.Hashids.GetNumbersFrom(String hash)
   at HashidsNet.Hashids.Decode(String hash)
   at WebApplication1.Controllers.WeatherForecastsApi2Controller.GetWeatherForecast2()

StackTrace 2:

System.ArgumentOutOfRangeException: Specified argument was out of the range of valid values.
   at System.Text.StringBuilder.CopyTo(Int32 sourceIndex, Span`1 destination, Int32 count)
   at HashidsNet.Hashids.GenerateHashFrom(ReadOnlySpan`1 numbers, Span`1& result)
   at HashidsNet.Hashids.GetNumbersFrom(String hash)
   at HashidsNet.Hashids.Decode(String hash)
   at WebApplication1.Controllers.WeatherForecastsApi2Controller.GetWeatherForecast2()

Documentation - Salt maximum size

I had an issue with the salt size while testing an application I was creating. It was frustrating that I could not find the salt limitation documented anywhere; I only realized it after a few tests. Did I miss something?

I tried with 5, 7, 10 and 15 hashid sizes and the result is the same.

Length: 40 | G596a7LjZE | 2Nda1MaEwz
Length: 41 | Nm9Vao3Rbr | 6Zw3Ll37Np
Length: 42 | kxmv3QVBGd | gXQ3v0aepv
Length: 43 | gXQ1v0Ve2J | gXQVv0apme
Length: 44 | ALmv18aPQN | ALma18VPQN
Length: 45 | Z1ma1v8zxO | Z1ma1v8zxO

So I was wondering if there is anything related to salt size on the documentation and I just missed or if we need to add something.

I also read that the salt can't be bigger than the alphabet, so using the default alphabet (a-z, A-Z, 0-9) we would have around 60+ characters; technically the salt could have a maximum of 60-ish characters. (?)

The whole point of this issue is: it's not really clear in the documentation. Maybe add something about this scenario?

How do I get HashIDs of a specific length?

I have input numbers of a specific range and length, created in a specific way. However, I need the generated hashes to be of a specific output length. Is this possible?

Currently, the constructor allows one to set a minimum length, but not a maximum.

Bug on long encoding with 16 alphabet chars

The code below throws a System.IndexOutOfRangeException:

var hashids = new Hashids("salt", alphabet:"0123456789ABCDEF");
var hash = hashids.EncodeLong(long.MaxValue);

full exception text:

System.IndexOutOfRangeException: Index was outside the bounds of the array.
   at HashidsNet.Hashids.BuildReversedHash(Int64 input, Char[] alphabet, Char[] hashBuffer)
   at HashidsNet.Hashids.GenerateHashFrom(Int64[] numbers)
   at HashidsNet.Hashids.EncodeLong(Int64[] numbers)
   at Program.<<Initialize>>d__0.MoveNext() in :line 7

Using multiple options for encoding/decoding

Hi,
Actually, this might be a feature request.
Hashids is currently injected as a singleton with a single salt and other settings. I'd like to use a different salt and encoded id length in certain situations. Is there a way to do this in the current version? Or, if not, could you please consider it?
Thank you.
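
Until such a feature exists, one workaround (a sketch, not an official recommendation) is simply to create and register more than one Hashids instance, each with its own salt and minimum length:

using HashidsNet;

// Two independent configurations; hashes produced by one cannot be decoded by the other.
var orderIds = new Hashids("order salt", 8);
var userIds = new Hashids("user salt", 12);

var orderHash = orderIds.Encode(42);
var userHash = userIds.Encode(42);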

HashIds not working after adding custom Output formatter

I am using Hashids library in my .NET 6 API to encode int values into string values in my model before sending it off to the API caller. The library is currently working fine.

Recently I had a requirement to provide pascal-cased JSON from the API (by default it was returning camel case). To achieve this, I went down the path of sending an Accept header with a custom value and added an output formatter to handle the pascal-case conversion.

public class PascalCaseFormatter : SystemTextJsonOutputFormatter
{
    public PascalCaseFormatter(JsonSerializerOptions jsonSerializerOptions) : base(jsonSerializerOptions)
    {
        SupportedMediaTypes.Clear();
        SupportedMediaTypes.Add(MediaTypeHeaderValue.Parse("application/json;profile=pascal"));
    }
}

Then I registered it in Startup.cs:

services.AddHashids(setup =>
{
    setup.Salt = Configuration["HashConfiguration:Salt"];
    setup.MinHashLength = Convert.ToInt32(Configuration["HashConfiguration:MinHashLength"]);
    setup.Alphabet = Configuration["HashConfiguration:Alphabet"];
});

services.AddControllers(options =>
{
    options.OutputFormatters.Add(
        new PascalCaseFormatter(new JsonSerializerOptions { PropertyNamingPolicy = null }));
});

If I send application/json;profile=pascal in the Accept header, I get a pascal-cased JSON result for any model which does not have a hashid converter in it. When I have a model like the one below, an error is thrown from the hashids library saying that the service provider is not found.

public class ApplicantAccountVm
{
    [JsonConverter(typeof(HashidsJsonConverter))]
    public int ApplicantAccountId { get; set; }
}
Result for this model:

(screenshot of the error response omitted)

What should I do to register hashid in this situation?

Regarding HashidsNet

For dotnet I get the error: There are no versions available for the package 'HashidsNet'.

Decode null string throws NullReferenceException

We were using the 1.4.1 version of this library and decoding a null string returned an empty array. But after upgrading to the 1.7.0 version it throws a NullReferenceException.

With git bisect I found that the bug was introduced in the commit 480cabe.

The bug is in the long[] Hashids.GetNumbersFrom(string hash) method. If hash is null, the variable result will be an empty array, which is expected; but the next line of GetNumbersFrom uses the Length property of hash, which is null, and so the NullReferenceException is thrown.

After creating the issue I'll create a PR with the unit tests and the fix.
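
A sketch of the kind of guard such a fix could add (hypothetical; the actual PR may differ):

// Hypothetical guard at the top of GetNumbersFrom(string hash):
if (string.IsNullOrEmpty(hash))
    return Array.Empty<long>();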

Same hash different integers

Hi guys

I am using this package and I really love it.
However, I have run into a duplicate issue where these two ids are hashed to the same 17-character string:
518441536
520571136

The resulting hash: gOnbdg2ENEBPRP5mY
That is, of course, using the same salt.

I see the same issue was opened last November and the person said they were mistaken, but I am pretty sure there is an issue here.

I am actively using this package and just last week I started seeing this problem.

@ullmark is this something that is being worked on to find the root cause?

Decoding big numbers fails when using custom alphabet.

Hi,

I'm getting some invalid results with the library when decoding large numbers. I discovered this whilst I was adding some unit tests to stress test my implementation.

It occurs when I have a custom alphabet. I've not tried other alphabets.

For the purposes of setting the salt, I've used a guid and I've also set the min hash length to 15, and all tests use the same values.

I'm hoping it's just a range thing caused by this particular alphabet, but perhaps other custom alphabets can error like this.

Of course, I doubt I'd ever see ids that big, but it did concern me a bit when the unit test on long.MaxValue failed, and it spurred me on to find the highest number I could decode correctly (it's 9549259626800016, see below).

The hash for long.MaxValue in this case is 5jEpN7mlkn0BRvq

Passing Example

As a control on my salt and min hash length, here is a basic example where I do not customise the alphabet, and the test passes as expected:

[Fact]
public void DefaultAlphabet()
{
    var longToTest = long.MaxValue;

    string SALT = "0741cbe4-2140-4615-8b5b-93294aa52c9f";
    int MIN_HASH_LENGTH = 15;

    var hashids = new HashidsNet.Hashids(SALT, MIN_HASH_LENGTH);
    var hashed = hashids.EncodeLong(longToTest);
    var decoded = hashids.DecodeLong(hashed);

    if (decoded.Length == 0)
    {
        throw new Exception("No result returned");
    }

    Assert.Equal(longToTest, decoded[0]);
}

Failing Examples

Here is an example where I customise the alphabet, by removing all instances of i (lower and upper) as well as the number 1. In this case, no result is returned and the exception "No result returned" is thrown.

I found that the highest number I can get to pass is 9549259626800016. If I try to run the custom alphabet test with 9549259626800017, it fails with the same exception as long.MaxValue.

long.MaxValue fails 👎

[Fact]
public void HashIdsCustomSanityTest()
{
    var longToTest = long.MaxValue;

    string SALT = "0741cbe4-2140-4615-8b5b-93294aa52c9f";
    int MIN_HASH_LENGTH = 15;
    string ALPHABET = "abcdefghjklmnopqrstuvwxyzABCDEFGHJKLMNOPQRSTUVWXYZ234567890";

    var hashids = new HashidsNet.Hashids(
        SALT, MIN_HASH_LENGTH, ALPHABET
        );
    var hashed = hashids.EncodeLong(longToTest);
    var decoded = hashids.DecodeLong(hashed);

    if (decoded.Length == 0)
    {
        throw new Exception("No result returned");
    }

    Assert.Equal(longToTest, decoded[0]);
}

9549259626800016 passes 👍

[Fact]
public void HashIdsCustomSanityTest()
{
    var longToTest = 9549259626800016;

    string SALT = "0741cbe4-2140-4615-8b5b-93294aa52c9f";
    int MIN_HASH_LENGTH = 15;
    string ALPHABET = "abcdefghjklmnopqrstuvwxyzABCDEFGHJKLMNOPQRSTUVWXYZ234567890";

    var hashids = new HashidsNet.Hashids(
        SALT, MIN_HASH_LENGTH, ALPHABET
        );
    var hashed = hashids.EncodeLong(longToTest);
    var decoded = hashids.DecodeLong(hashed);

    if (decoded.Length == 0)
    {
        throw new Exception("No result returned");
    }

    Assert.Equal(longToTest, decoded[0]);
}

9549259626800017 fails 👎

Same exception thrown as above.

[Fact]
public void HashIdsCustomSanityTest()
{
    var longToTest = 9549259626800017;

    string SALT = "0741cbe4-2140-4615-8b5b-93294aa52c9f";
    int MIN_HASH_LENGTH = 15;
    string ALPHABET = "abcdefghjklmnopqrstuvwxyzABCDEFGHJKLMNOPQRSTUVWXYZ234567890";

    var hashids = new HashidsNet.Hashids(
        SALT, MIN_HASH_LENGTH, ALPHABET
        );
    var hashed = hashids.EncodeLong(longToTest);
    var decoded = hashids.DecodeLong(hashed);

    if (decoded.Length == 0)
    {
        throw new Exception("No result returned");
    }

    Assert.Equal(longToTest, decoded[0]);
}

Using a similar salt produces the same encoding/decoding with a large salt size.

I'm using this library to mask the ids for a Web API. I'm using a constant Guid + the name of the resource to produce the salt. I noticed that if two salts only differ after the 42nd character, the resulting masks are identical.

I threw together a sample to show what I mean.

    class Program
    {
        static string formatString =
            "Length: {0}" + Environment.NewLine +
            "Num Equal: {1}" + Environment.NewLine +
            "Num Different: {2}" + Environment.NewLine;

        static void Main(string[] args)
        {
            for(int i = 0; i < 100; i++)
            {
                var numEqual = 0;
                var numDifferent = 0;
                for(int j = 0; j  < 100; j++)
                {
                    var saltBase = RandomString(i);

                    var salt1 = saltBase + "AAAA";
                    var salt2 = saltBase + "ZZZZ";

                    var hasher1 = new HashidsNet.Hashids(salt1);
                    var hash11 = hasher1.Encode(1);
                    var hash12 = hasher1.Encode(2);
                    var hash13 = hasher1.Encode(3);

                    var hasher2 = new HashidsNet.Hashids(salt2);
                    var hash21 = hasher2.Encode(1);
                    var hash22 = hasher2.Encode(2);
                    var hash23 = hasher2.Encode(3);

                    if (hash11 == hash21
                        && hash12 == hash22
                        && hash13 == hash23)
                    {
                        numEqual++;
                    }
                    else
                    {
                        numDifferent++;
                    }
                }
                Console.WriteLine(string.Format(formatString, i, numEqual, numDifferent));
            }
        }

        public static string RandomString(int length)
        {
            var str = "";
            do
            {
                str += Guid.NewGuid().ToString().Replace("-", "");
            }

            while (length > str.Length);

            return str.Substring(0, length);
        }
    }

Sample Output:

Length: 41
Num Equal: 0
Num Different: 100

Length: 42
Num Equal: 96
Num Different: 4

Length: 43
Num Equal: 90
Num Different: 10

Length: 44
Num Equal: 100
Num Different: 0

I switched salt generation to be resource name + Guid() and I have much better results with only an odd single overlap every now and then.

Am I using the library wrong?

Is there a guarantee of encoded sequence uniqueness with differing salts?

We're currently using hashids to generate short-form ids based on a sequence, so that ids are human readable.

We're producing "friendly" ids by tracking sequences within "categories", e.g. "Cat1" = 1, "Cat2" = 4, and so generating a new ID is a process of:

  1. Incrementing the sequence value
  2. Requesting Hashids to encode the value of the sequence where the category is the salt

Are the encoded Hashids guaranteed to be unique between salts, e.g. does an encoded ID for salt 'A' NEVER show up in the encoded sequence for salt 'B' (and vice versa)?

Encoded hash is not being decoded (rounding bug)

I have a reproducible bug in which certain input does create a valid hashid, but the Decode fails!

Repro code where the result is empty, instead of equal to the source:

var hashId = new Hashids(salt: "0Q6wKupsoahWD5le", alphabet: "abcdefghijklmnopqrstuvwxyz1234567890!", seps: "cfhistu");

var source = new long[] { 35887507618889472L, 30720L, Int64.MaxValue };

var encoded = hashId.EncodeLong(source);

var result = hashId.DecodeLong(encoded);

// ASSERT result == source FAILS

(Both the first (35887507618889472L) and the last (Int64.MaxValue) long values fail here)

After some investigation of my own, it turns out the root cause is the use of the Math.Pow function in the Unhash function. Due to its double nature, it is subject to rounding errors.

Proposed solution: update the Unhash function with the following code:

private long Unhash(string input, string alphabet)
{
    long number = 0;
    var alphabetLength = new System.Numerics.BigInteger(alphabet.Length);

    for (var i = 0; i < input.Length; i++)
    {
        var pos = (long)alphabet.IndexOf(input[i]);
        number += (pos * (long)System.Numerics.BigInteger.Pow(alphabetLength, input.Length - i - 1));
    }

    return number;
}

Hashids not generating with proper minhashlength

We are using salt = blacksalt, minHashLength = 4 and alphabet = abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ

When we try to encode the values below, the generated values have different lengths.

Input: 39300
Output: dfrc (output is a dummy value, but fine, since the length is 4)

Input: 40000
Output: ddfdf (output is a dummy value, but the length is 5)

Please guide us on how to get a length-4 result for the input 40000.

Support for long

Hi,

I was trying to generate a hashid for the long data type, but it doesn't seem to be supported?

Decoding fails with custom separators

Please consider the example below. The only thing I've done is remove the last letter ('U') from the default separators. After removing just this one letter, the decoding function fails to decode the string and returns an empty array instead.

long[] input = new long[] { 1021432904621883393 };

Hashids target = new Hashids(seps: "cfhistCFHIST"); // default is "cfhistuCFHISTU"

// Encode
string hashid = target.EncodeLong(input); // hashid will be set to "uXGMvwPWDRuwv"

// Decode
long[] decoded = target.DecodeLong(hashid); // decoded will be set to an empty long[]

Assert.Equal(input, decoded); // fails

IndexOutOfRangeException when decoding string with chars not in alphabet

I'm having a similar issue to #8, but I'm not sure if I've misunderstood something about how HashIds is supposed to work.

My test case is like so:

[TestMethod]
public void HashIdsBug()
{
    var encoder = new Hashids("1234", 6, "ABCDEFGHIJKLMNOPQRSTUVWXY");
    var decoded = encoder.Decode("ZZZZZZ");
}

This throws because the Zs are not in the alphabet: they end up being decoded to a negative number, and when Hashids tries to re-hash them to check the salt, it throws because it tries to index an array with a negative number.

I'm just not sure if this is by design? It's like this even on the hashids.org demo (if you change the variables to those above, it crashes). I would think that Hashids should be resistant to this sort of thing, same as decoding with the wrong salt.

If this is a bug, it can be fixed easily by either checking if any decoded numbers are < 0 before trying to re-encode them, or checking the input hash for any characters which aren't in the alphabet. @ullmark I'm happy to submit a pull request if you agree that this is broken.

Same id generated for different integers

I've generated two ids for these numbers: 205 and 449 (I called the method twice).
I'm using this salt: "Models.Rendszer.Cimke_ohd3mjNekCk9e9iL"
Min. hash length: 18
Encoding produced the same id for both numbers: "1YOvmed9z46Lyn3w0E"
How is this possible?

IndexOutOfRangeException when decoding string with guard chars

Hi @ullmark ,

I've ended up in a scenario where the Decode method throws an exception when we pass any of the characters in the "guard" generated by the SetupGuards method.

Here's how to reproduce it:

// this generates the following guards: rKEa
var hashids = new Hashids("please fix me <3", 15); 

// passing any of the chars defined in the guard may throw an exception
Action invocation = () => hashids.Decode("a");

invocation.ShouldThrow<IndexOutOfRangeException>();

And here's what happens in the GetNumbersFrom(string hash) method, see below:

(...)

// by passing a guard char this replace fails to add a separator (" ")
var hashBreakdown = guardsRegex.Replace(hash, " ");

// hashArray will be empty since there's no  " " to split..
var hashArray = hashBreakdown.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);

if (hashArray.Length == 3 || hashArray.Length == 2)
       i = 1;

//  ...and this line fails.
hashBreakdown = hashArray[i];

I'm not sure what the guards stand for in that case (didn't make much effort there, eh) but I believe that's not how it's supposed to behave.

Kind regards,
Pedro.
