pmndrs / detect-gpu

984 stars · 18 watchers · 55 forks · 9.41 MB

Classifies GPUs based on their 3D rendering benchmark score, allowing the developer to provide sensible default settings for graphically intensive applications.

License: MIT License

HTML 2.13% JavaScript 4.26% TypeScript 93.61%
gpu detection browser webgl webgl2 benchmarks progressive-enhancement adaptive device hardware

detect-gpu's People

Contributors

bastienrobert, dependabot[bot], drcmda, gabriellebaudy, gordonnl, gsimone, gusted, nicknotfun, puckey, sidsethupathi, timvanscherpenzeel

detect-gpu's Issues

iOS detection broken?

I was just testing with my iPhone (XS / iOS 14.6) and noticed it was returning the default tier.

Looking into things, I see the magic pixel number returned by deobfuscateAppleGPU is neither 801621810 nor 8016218135; instead, it is returning 80162181255.

It then defaults to the renderer string that was passed in and finds nothing, since that string is obfuscated. When deobfuscateAppleGPU fails to find a matching magic pixel, I think it should return all iPad/iPhone chipsets.

i.e. something like:

      const { oldChipsets, newChipsets } = deviceInfo?.isIpad
        ? {
            oldChipsets: ['apple a9x gpu', 'apple a10 gpu', 'apple a10x gpu'],
            newChipsets: ['apple a12x gpu']
          }
        : {
            oldChipsets: ['apple a9 gpu', 'apple a10 gpu'],
            newChipsets: [
              'apple a11 gpu',
              'apple a12 gpu',
              'apple a13 gpu',
              'apple a14 gpu'
            ]
          };
      renderers = (
        {
          // iPhone 11, 11 Pro, 11 Pro Max (Apple A13 GPU)
          // iPad Pro (Apple A12X GPU)
          // iPhone XS, XS Max, XR (Apple A12 GPU)
          // iPhone 8, 8 Plus (Apple A11 GPU)
          '80162181255': newChipsets,
          // iPhone SE, 6S, 6S Plus (Apple A9 GPU)
          // iPhone 7, 7 Plus (Apple A10 GPU)
          // iPad Pro (Apple A10X GPU)
          '8016218135': oldChipsets
          // eslint-disable-next-line @typescript-eslint/no-explicit-any
        } as any
      )[pixels.join('')] ?? [...newChipsets, ...oldChipsets];

Automatic updates

It would be great to add a GitHub Action that updates the benchmarks every week or month and automatically publishes a new release if the benchmarks have changed.

Wrong "device" name

Hello, thanks for this awesome tool for three.js, but I found a little bug on my mobile phone.
I'm using a Samsung Galaxy S21 Ultra, but the live demo page linked in the README.md says the device is "huawei mate 40 pro 5g". That is wrong, but it was probably guessed from the GPU model, because both that Huawei's SoC and the S21 Ultra's SoC use the same ARM Mali-G78 GPU.
I think that rather than guessing the phone, it would be more accurate to just report the GPU model together with device-specific display info. Alternatively, the guessing algorithm should be improved to stand on more values than just the GPU model and display info. Nearly every non-Qualcomm SoC in Android devices uses a Mali GPU, so this can cause real trouble for developers who didn't hit this bug while developing.

Linux with AMD integrated graphics is not detected correctly.

This is the string returned on Linux: amd, amd renoir (llvm 14.0.6), opengl 4.6) (hardware: Ryzen 5700G with Vega 8)

After regex processing (str.replace(/\([^)]+\)/, "").match(/\d+/)), the matched graphics card model is 4 (from "opengl 4.6").

The final device returned is: amd radeon r4e
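
A minimal reproduction of the parse, reusing the renderer string and regex from above (standalone sketch, not the library's actual code path):

// The replace is not global, so only the first parenthesised group is
// stripped; "opengl 4.6)" survives and its "4" wins the digit match.
const renderer = 'amd, amd renoir (llvm 14.0.6), opengl 4.6)';
const model = renderer.replace(/\([^)]+\)/, '').match(/\d+/);
console.log(model?.[0]); // "4" (taken from "opengl 4.6", not a card model)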

Wrong detection on iPad Pro (11", 2nd gen)

Hi,
looks like my iPad Pro 11" 2nd gen is recognised as an iPad Air.

The Pro 11 2nd gen should be tier 3, so this could lead to wrong assumptions.

I tried to debug the library but I don't know what to check; I've printed some data from the deobfuscateAppleGPU() method (console logs below).

I also tried to run GFXBench Metal as a double check, and I get a different GPU chipset from the one detect-gpu reports.

[Log] pixelId – "80162181255" (index.js, line 185)
[Log] renderers (index.js, line 236)
Array (11)
0 "apple a8 gpu"
1 "apple a8x gpu"
2 "apple a9 gpu"
3 "apple a9x gpu"
4 "apple a10 gpu"
5 "apple a10x gpu"
6 "apple a12 gpu"
7 "apple a12x gpu"
8 "apple a12z gpu"
9 "apple a14 gpu"
10 "apple m1 gpu"

[Log] chipsets (index.js, line 237)
Array (11)
0 ["a8", "8016218135", 15] (3)
1 ["a8x", "8016218135", 15] (3)
2 ["a9", "8016218135", 15] (3)
3 ["a9x", "8016218135", 15] (3)
4 ["a10", "8016218135", 15] (3)
5 ["a10x", "8016218135", 15] (3)
6 ["a12", "801621810", 15] (3)
7 ["a12x", "801621810", 15] (3)
8 ["a12z", "801621810", 15] (3)
9 ["a14", "801621810", 15] (3)
10 ["m1", "801621810", 15] (3)

ipad (6th generation) detected as ipad pro (11-inch)

Results:

{
  "device": "apple ipad pro (11-inch)",
  "fps": 116,
  "gpu": "apple a12x gpu",
  "isMobile": true,
  "tier": 3,
  "type": "BENCHMARK"
}

other debug info:

[Log] 801621810 (index.js, line 236)
[Log] {renderers: ["apple a12x gpu"]} (index.js, line 491)
[Log] queryBenchmarks - found type: – {type: "apple"} (index.js, line 402)
[Log] found 3 matching entries using version '12': – ["apple a12 gpu", "apple a12x gpu", "apple a12z gpu"] (3) (index.js, line 417)
[Log] apple a12x gpu matched closest to apple a12x gpu with the following screen sizes – "[[2224,1668,116,\"apple ipad pro (11-inch)\"],[2388,1668,106,\"apple ipad pro (11-inch) (2nd generation)\"],[2732,2048,60,\"apple ipad pr…" (index.js, line 435)

Looks like the WebGL hack is returning 801621810 instead of 8016218135...

The iPad is running iOS 14.0.1.

Higher frame rate reported on inferior GPU

Hi,

Thanks for building this tool!

I'm seeing a dubious result here on an 8-year-old MacBook Pro versus what I see on a much newer and more powerful iMac.

Apparently, the benchmark ranks the 8-year-old laptop higher than the new iMac.

Early 2013 MacBook Pro – Google Chrome

{
  "fps": 56,
  "gpu": "nvidia geforce gt 650m opengl engine",
  "isMobile": false,
  "tier": 2,
  "type": "BENCHMARK"
}

2017 iMac – Google Chrome

{
  "fps": 50,
  "gpu": "amd radeon pro 580 opengl engine",
  "isMobile": false,
  "tier": 2,
  "type": "BENCHMARK"
}

Happy to run tests/more benchmarks.

MacBook M1 Pro Safari 15.6 Not Working

Safari 15.6 on a MacBook Pro (M1 Pro chip) reports incorrectly. It works fine with Chromium-based browsers.

Safari

{
  "gpu": "apple gpu (Apple GPU)",
  "isMobile": false,
  "tier": 1,
  "type": "FALLBACK"
}

Edge

{
  "fps": 317,
  "gpu": "apple m1 pro",
  "isMobile": false,
  "tier": 3,
  "type": "BENCHMARK"
}

Lots of high-end GPUs report benchmark results capped at 60 or 144 FPS

For example "nvidia geforce rtx 3080 ti" reports 60 FPS, whereas "nvidia geforce rtx 3080 ti laptop gpu" has 190 FPS in the benchmark data. Is the FPS result limited by display refresh rate in some cases (but not in all)? This is a big problem for applications that need classification above 60 FPS.

FPS sometimes undefined on the result

My colleague got the following result (Ubuntu):

{
  "gpu": "google, swiftshader device (subzero) (0x0000c0de), swiftshader driver (ANGLE (Google, Vulkan 1.3.0 (SwiftShader Device (Subzero) (0x0000C0DE)), SwiftShader driver))",
  "isMobile": false,
  "tier": 1,
  "type": "FALLBACK"
}

Our code didn't correctly handle cases where fps is undefined - is this expected behavior?
I can open a PR to document it, or to add fps: -1 or something like that - but I wanted to know if this is a supported case.
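
For reference, a consumer-side guard along these lines works (sketch; assumes fps is simply absent from the result when no benchmark entry matches):

import { getGPUTier } from 'detect-gpu';

const result = await getGPUTier();
if (result.type === 'FALLBACK' || result.fps === undefined) {
  // No benchmark FPS available (e.g. SwiftShader software rendering):
  // decide quality settings from result.tier alone.
}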

Safari 12 crashes with powerPreference set to high-performance

Unfortunately I've just found out that Safari 12 crashes when the powerPreference attribute is set to 'high-performance' in the canvas.getContext('webgl', attributes) call.

I'm not sure how long this has been the case.

Previously I made the pull request to include this attribute as it provides more accurate results for devices with multiple GPUs.

I guess a solution is to remove that attribute, or to add a check for Safari?
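
A guard might look like this (sketch; the isSafari test is a hypothetical user-agent check, not code from the library):

// Only request 'high-performance' outside Safari, where setting it
// has been observed to crash Safari 12.
const isSafari = /^((?!chrome|android).)*safari/i.test(navigator.userAgent);
const attributes: WebGLContextAttributes = isSafari
  ? {}
  : { powerPreference: 'high-performance' };
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl', attributes);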

Brave always returns tier 0

Brave Browser on an Intel 2019 MacBook Pro (Ventura 13.2) always reports tier 0.

Chrome on the same machine correctly reports tier 3.

Brave:

{
    "wasm": true,
    "simd": true,
    "webgl": true,
    "webgl2": true,
    "offscreen": true,
    "browser": {
        "name": "chrome",
        "version": "112.0.0",
        "os": "Mac OS",
        "type": "browser"
    },
    "gpu": {
        "fps": 14,
        "gpu": "amd radeon pro 5300m",
        "isMobile": false,
        "tier": 0,
        "type": "BENCHMARK",
        "hwaccel": false
    },
    "supportedInBrowserList": true
}

Chrome:

{
    "wasm": true,
    "simd": true,
    "webgl": true,
    "webgl2": true,
    "offscreen": true,
    "browser": {
        "name": "chrome",
        "version": "112.0.0",
        "os": "Mac OS",
        "type": "browser"
    },
    "gpu": {
        "fps": 130,
        "gpu": "amd radeon pro 5300m",
        "isMobile": false,
        "tier": 3,
        "type": "BENCHMARK",
        "hwaccel": true
    },
    "supportedInBrowserList": true
}

Hardware acceleration is enabled in Brave and confirmed to work on multiple sites.

GeForce RTX 3060 Ti is FALLBACK

I ran the test on a GeForce RTX 3060 Ti.
The results were "tier 1" and "FALLBACK".

{
    "isMobile": false,
    "tier": 1,
    "type": "FALLBACK"
}

This is a disproportionate result for the performance of the GPU.

Chrome, Firefox, and the new (Chromium) Edge all gave the same result.
In all browsers, we have confirmed that WebGL web games run comfortably.

Is this a problem caused by the fact that "GeForce RTX 3060 Ti" is not registered in the database?
Also, how can I contribute to updating the database, if I can?

Safari desktop always returns Tier 1 apple gpu

Using Safari on a desktop machine always returns the following:

{ 
  device: undefined,
  fps: undefined,
  gpu: "apple gpu",
  isMobile: false,
  tier: 1,
  type: "FALLBACK"
}

Tested on a 2015 MBP and a 2020 MBP (M1).

Reporting a tier based on the rank of the gpu doesn't seem like the best idea

Why use the rank of the GPUs rather than an actual benchmark score? With that technique, if you have two GPUs with identical performance but the separation between two tiers falls between them, one will get a lower tier despite having the same performance.

I imagine the rank is a quick and easy way to get a more or less viable tier, but I would envision something more accurate, where you could say "I want tier 2 to be all GPUs with performance above an Nvidia GTX 1050, tier 3 to be any GPU with performance above a GTX 1080", etc. Of course, this could kind of work with the rank system too.
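
A threshold-based scheme as proposed might look like this (sketch; the reference FPS values are hypothetical placeholders, not numbers from the benchmark data):

// Map a measured benchmark FPS to a tier by comparing against reference GPUs.
const GTX_1050_FPS = 60;  // hypothetical reference value
const GTX_1080_FPS = 120; // hypothetical reference value

function tierFromFps(fps: number): 0 | 1 | 2 | 3 {
  if (fps >= GTX_1080_FPS) return 3;
  if (fps >= GTX_1050_FPS) return 2;
  return fps > 0 ? 1 : 0;
}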

scripts/analytics_embed.js missing

I am looking at improving the tests and was wondering about the source of the analytics data. The readme points me to scripts/analytics_embed.js, but this file is missing. Any chance it could be added?

Properly handling VR headset

Hi! I am currently working with an Oculus Quest 2 and I am trying to use detect-gpu for 3D visualization, in order to manage the level of detail of the meshes that will be loaded in the scene. However, it seems that detect-gpu doesn't handle this type of device properly: besides the GPU field, none of the other fields give correct information. The field with the biggest impact is isMobile, which is reported as false even though a VR headset's hardware is much closer to a mobile device than to a PC.

Is there something to be done?
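
Until the library handles headsets, a consumer-side heuristic could patch isMobile (sketch; it assumes the Quest's built-in browser advertises "OculusBrowser" in its user-agent string):

import { getGPUTier } from 'detect-gpu';

// Hypothetical workaround: treat standalone VR headsets as mobile-class hardware.
const isStandaloneVR = /OculusBrowser/i.test(navigator.userAgent);
const result = await getGPUTier();
const isMobileClass = result.isMobile || isStandaloneVR;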

DESKTOP GPU recognized as MOBILE GPU

Hello sir,
First I just want to thank you for creating this utility, it is incredibly useful!

My problem is: when I visit your demo site https://timvanscherpenzeel.github.io/detect-gpu/ I get the correct detection for my GPU: Tier: GPU_DESKTOP_TIER_3,
Type: BENCHMARK - nvidia geforce gtx 980 ti.

But I npm-installed the package today, and when I use it locally with this code:

import { getGPUTier } from 'detect-gpu';

const GPUTier = getGPUTier({
  mobileBenchmarkPercentages: [0, 50, 30, 20], // (Default) [TIER_0, TIER_1, TIER_2, TIER_3]
  desktopBenchmarkPercentages: [0, 50, 30, 20], // (Default) [TIER_0, TIER_1, TIER_2, TIER_3]
});

console.log(GPUTier);

I get tier: GPU_MOBILE_TIER_1, type: FALLBACK.

I already have a canvas element using three.js inside my HTML; maybe that is somehow interfering, I don't know.

[Question]: Is the FPS score actually a maximum potential FPS score?

I've been implementing this on a Mapbox canvas to see performance scores as it is used. For example, we trigger this on the initial map load and also during a heavier animation which can sometimes show very obvious lag. I'm obviously going to make improvements, but it would be great to track what has the greatest effect as we make the changes.

However, I seem to get the same response from the library regardless of when the event is triggered:

{
  "fps": 198,
  "gpu": "apple m1",
  "isMobile": false,
  "tier": 3,
  "type": "BENCHMARK"
}

It's a bit of a misleading key name if that is the case. If not, any advice on getting something more representative of what users are actually experiencing?

Cheers.

hard to tell what the content is of benchmark updates

Due to the minified nature of the benchmark data, it is hard to tell how it changes every week.

Perhaps we could:

  • output beautified benchmark data using something like JSON.stringify(data, null, 2) (see the sketch below)
  • make sure the sorting of the data is consistent (if it isn't already)
  • add a separate minification step (for the gzip file?)
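
The beautify and separate-minification points could be as simple as the following (sketch; file names and the data source are hypothetical):

import { readFileSync, writeFileSync } from 'fs';
import { gzipSync } from 'zlib';

const data = JSON.parse(readFileSync('benchmarks-raw.json', 'utf8'));
// 1. Commit a stable, human-diffable form so weekly changes show up in git diffs.
writeFileSync('benchmarks.json', JSON.stringify(data, null, 2));
// 2. Minify and gzip only as a separate release artifact.
writeFileSync('benchmarks.min.json.gz', gzipSync(JSON.stringify(data)));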

Bug: detect-gpu crashes on SSR/SSG

Problem

detect-gpu will crash on SSR/SSG.

ReferenceError: window is not defined
    at /hoge/hoge/node_modules/detect-gpu/dist/detect-gpu.umd.js:1:2591
    at /hoge/hoge/node_modules/detect-gpu/dist/detect-gpu.umd.js:1:131

I know detect-gpu depends on the window object, but getGPUTier returns early when window is absent, so I think this ReferenceError is unexpected behavior.

Early Return
https://github.com/TimvanScherpenzeel/detect-gpu/blob/23b80f1d0c02417804c4913ef0702524a56469b7/src/index.ts#L69-L75

https://github.com/TimvanScherpenzeel/detect-gpu/blob/23b80f1d0c02417804c4913ef0702524a56469b7/src/internal/deviceInfo.ts#L2

I think this line is the cause of this ReferenceError.

Reproduce

node

and

const { getGPUTier } = require("detect-gpu");

Notes

Can I fix this issue?
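
As a consumer-side workaround until a fix lands, deferring the import itself avoids the crash, since the ReferenceError is thrown at module evaluation time (the top-level window access in deviceInfo.ts), not inside getGPUTier (sketch):

// Only load detect-gpu in the browser; a static import would evaluate
// the window-touching module during SSR and throw.
if (typeof window !== 'undefined') {
  const { getGPUTier } = await import('detect-gpu');
  console.log(await getGPUTier());
}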

Incorrect NPM_TOKEN

I've forced some runs of the workflows and fixed some issues on the workflows branch, but now I'm hitting https://github.com/pmndrs/detect-gpu/runs/2842464460?check_suite_focus=true

npm ERR! code EOTP
npm ERR! This operation requires a one-time password from your authenticator.
npm ERR! You can provide a one-time password by passing --otp=<code> to the command you ran.
npm ERR! If you already provided a one-time password then it is likely that you either typoed
npm ERR! it, or it timed out. Please try again.

Which indicates that the NPM_TOKEN is currently a normal token and should be changed to an automation token.

CC @TimvanScherpenzeel

MBP reporting tier 1

Since the M1 update, my MBP gets tier 1 / 24 fps reported instead of tier 3 / 60 fps.

2018 MBP
32 GB 2400 MHz DDR4
Radeon Pro Vega 20 4 GB
Intel UHD Graphics 630 1536 MB

Usually runs at 60 fps on the main monitor, and 30 fps on my external (30 Hz).
Main monitor (higher refresh rate, 60 fps):

{
  "fps": 24,
  "gpu": "amd radeon pro vega 20",
  "isMobile": false,
  "tier": 1,
  "type": "BENCHMARK"
}

External monitor (30 Hz, 30 fps):

{
  "fps": 30,
  "gpu": "amd radeon pro vega 20",
  "isMobile": false,
  "tier": 2,
  "type": "BENCHMARK"
}

M1 support

M1 devices currently report as tier 1, so people may catalog them as low-quality devices, despite being high quality:
{ "gpu": "apple m1 pro (Apple M1 Pro)", "isMobile": false, "tier": 1, "type": "FALLBACK" }

Pass `gl` context

Creating a gl context is expensive.

Currently, unless a forceRendererString is passed, the lib creates a new context.
Maybe we can add an option to pass an existing context?
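
The option might look something like this (sketch; glContext is an assumed option name, not confirmed API):

import { getGPUTier } from 'detect-gpu';

// Hypothetical usage: hand the library an existing context instead of
// letting it create its own.
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl');
const tier = await getGPUTier({ glContext: gl ?? undefined });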

Crash in an old browser. (iPad Mini 1 Safari 9)

The build output of this module seems to be ES6+.

After using this module, I confirmed that
the site was inaccessible from a device using an old browser.
An error occurs because the old browser does not recognize the const keyword.

I tried to transpile the module to ES5 myself,
but the following error occurs as the result of next-transpile-modules + babel:
TypeError: I is not a function

Firefox

Message in log:

WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER.

And it reports tier 0:

{
  "fps": 1,
  "gpu": "radeon r9 200",
  "isMobile": false,
  "tier": 0,
  "type": "BENCHMARK"
}

Firefox 102.0.1 (64-bit).

Before upgrading, Firefox was at version 85, which worked fine.
