Detect Browser Graphics Performance

Detect Graphics card performance - JS

requestAnimationFrame (rAF) can help with this.

You can figure out your framerate using rAF. Details about that here: calculate FPS in Canvas using requestAnimationFrame. In short, you take the time difference between frames and divide 1 by it (e.g. 1 / 0.0159 s ≈ 62 fps).

Note: with any method you choose, performance will be arbitrarily decided. Perhaps anything over 24 frames per second could be considered "high performance."
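
A minimal sketch of that approach (the sample count and the 24 fps cutoff are arbitrary choices, as noted above, not values from any spec):

// Rough FPS probe using requestAnimationFrame.
// Measures the mean frame time over `sampleFrames` frames, then reports a verdict.
function probeFps(sampleFrames = 60) {
  return new Promise(resolve => {
    let count = 0;
    let last = performance.now();
    let totalDelta = 0;

    function frame(now) {
      totalDelta += now - last;   // ms between frames
      last = now;
      if (++count < sampleFrames) {
        requestAnimationFrame(frame);
      } else {
        const fps = 1000 / (totalDelta / sampleFrames); // 1 / mean frame time
        resolve({ fps, highPerformance: fps > 24 });    // arbitrary cutoff
      }
    }
    requestAnimationFrame(frame);
  });
}

probeFps().then(result => console.log(result));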

Detect actual available computing and processing power browser javascript

Of course, I know the best approach is to improve performance. There are lots of ways to improve performance, and lots of effort is needed. The ideal path I list here:

1. Reduce network cost (compress / concatenate / CDN...).

2. Reduce CPU and memory cost (GPU-accelerated CSS transforms / smaller DOM tree / virtual DOM / shadow DOM...).

3. [most important] Keep everything under control and chain it up in promises. For images, CSS and JS you can use onload to trigger the next step; animations you can control through the JS API (https://css-tricks.com/controlling-css-animations-transitions-javascript/), or use something like move.js. This works much like a web game, so you probably need a loading view too (see the sketch after this list).

4. Then you know whether the page is idle or not.
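
A minimal sketch of point 3, assuming a hypothetical list of image URLs to load in sequence before kicking off the animation work:

// Wrap an image load in a promise so it can be chained.
// `urls` and startAnimations() are hypothetical; swap in your own assets and next step.
function loadImage(url) {
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.onload = () => resolve(img);
    img.onerror = reject;
    img.src = url;
  });
}

const urls = ['a.png', 'b.png', 'c.png'];

// Load everything first (show a loading view here), then start the real work.
Promise.all(urls.map(loadImage))
  .then(images => {
    // assets are ready; anything after this point runs on an otherwise idle page
    startAnimations(images); // hypothetical next step
  })
  .catch(err => console.error('asset failed to load', err));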

--- old answer ---

I made a npm package based on this idea, https://github.com/postor/cpu-stat-browser

--- old answer ---

I use setTimeout to detect CPU usage: the smaller the logged number, the busier the CPU. This counts usage outside the page/browser as well; you can start your job and stop this probe once the number in the log is big enough.

// k counts how many setTimeout callbacks fire per period;
// the smaller the count logged per period, the busier the CPU.
let k = 0
let profilling = true
let interval = null
let period = 1000

console.log = (x) => {
  document.write(x + '<br>')
  window.scrollTo(0, document.body.scrollHeight)
}

start()
interval = setInterval(() => { console.log(k); k = 0 }, period)

async function start() {
  while (profilling) { await step() }
}

function stop() {
  clearInterval(interval)
  profilling = false
}

async function step() {
  return new Promise((resolve, reject) => {
    setTimeout(() => { k++; resolve() })
  })
}

Detect if browser uses GPU accelerated graphics for render

The main issue at the time was that all of the PNGs on the screen had to be recalculated and recompiled by the browser.

There were several things I had to do to maximize performance:

  1. Always have predefined width and height attributes on images. This lets the browser know what size the picture should be, and when used together with scale() it won't recalculate and recompile those images. Those recalculations were very expensive. So basically, as long as nothing other than scale() modified the image size, everything was perfect and the animations were smooth.
  2. Wherever possible avoid the visibility property; it effectively acts like opacity: 0, keeping the element in the layout and making layout recalculation much longer. Use display: none instead wherever you can, as this completely removes the element from the layout calculations. This was a major pitfall, because I had to re-think the UI to exclude visibility and minimize the DOM node count. (A minimal sketch of both points follows this list.)
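
A minimal sketch of both points, assuming a hypothetical image #thumb and a panel #panel in the page:

// 1. Fix the image's intrinsic size up front, then animate only with scale(),
//    so the browser doesn't have to re-layout and re-decode the image.
const img = document.getElementById('thumb');   // hypothetical element
img.width = 200;
img.height = 150;
img.style.transform = 'scale(1.5)';

// 2. Prefer display:none over visibility:hidden so the element drops out of
//    layout calculations entirely instead of just being invisible.
const panel = document.getElementById('panel'); // hypothetical element
function hidePanel() { panel.style.display = 'none'; }
function showPanel() { panel.style.display = ''; }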

Overall it was a huge adventure and a great learning experience; I hope this question/answer helps someone.

Consistent way to get a browser's refresh rate

Frame rate by running mean.

As the timers in browsers are not very accurate, it is best to use a running mean.

A running mean is the average over the previous n frames.

From the running mean we can estimate the frame rate by selecting the closest standard rate above the mean.

Example

The example implements the running mean via the object FrameRate.

  • To create one, call const rate = FrameRate(samples), where samples is the number of samples to take the mean of.

  • To add a sample, assign any value to the property rate.tick; it uses performance.now to measure time.

  • rate.rate holds the current mean in fps. If there are not enough samples yet it will return a rate of 1.

  • rate.FPS holds the calculated frames per second. This is an estimate, and getting the value also updates the number of dropped frames.

  • rate.dropped is the estimated number of frames dropped per sample count at the current frame rate returned by rate.FPS. Note that dropped should be queried after FPS.

function FrameRate(samples = 20) {
    const times = [];
    var s = samples;
    while (s--) { times.push(0) }
    var head = 0, total = 0, frame = 0, previouseNow = 0, rate = 0, dropped = 0;
    const rates = [0, 10, 12, 15, 20, 30, 60, 90, 120, 144, 240];
    const rateSet = rates.length;
    const API = {
        sampleCount: samples,
        reset() {
            frame = total = head = 0;
            previouseNow = performance.now();
            times.fill(0);
        },
        set tick(soak) {
            const now = performance.now();
            total -= times[head];
            total += (times[head++] = now - previouseNow);
            head %= samples;
            frame++;
            previouseNow = now;
        },
        get rate() { return frame > samples ? 1000 / (total / samples) : 1 },
        get FPS() {
            var r = API.rate, rr = r | 0, i = 0;
            while (i < rateSet && rr > rates[i]) { i++ }
            rate = rates[i];
            dropped = Math.round((total - samples * (1000 / rate)) / (1000 / rate));
            return rate;
        },
        get dropped() { return dropped },
    };
    return API;
}

const fRate = FrameRate();
var frame = 0;
requestAnimationFrame(loop);
fRate.reset();

function loop() {
    frame++;
    fRate.tick = 1;
    meanRateEl.textContent = "Mean FPS: " + fRate.rate.toFixed(3);
    FPSEl.textContent = "FPS: " + fRate.FPS;
    droppedEl.textContent = "Dropped frames: " + fRate.dropped + " per " + fRate.sampleCount + " samples";
    requestAnimationFrame(loop);
}
body {user-select: none;}
<div id="meanRateEl"></div>
<div id="FPSEl"></div>
<div id="droppedEl"></div>

THREE.JS - How To Detect Device Performance/Capability & Serve Scene Elements Accordingly

Browsers don't afford a lot of information about the hardware they're running on, so it's difficult to determine how capable a device is ahead of time. With the WEBGL_debug_renderer_info extension you can (maybe) get at more details about the graphics hardware being used, but the values returned don't seem consistent and there's no guarantee that the extension will be available. Here's an example of the output:

ANGLE (Intel(R) HD Graphics 4600 Direct3D11 vs_5_0 ps_5_0)
ANGLE (NVIDIA GeForce GTX 770 Direct3D11 vs_5_0 ps_5_0)
Intel(R) HD Graphics 6000
AMD Radeon Pro 460 OpenGL Engine
ANGLE (Intel(R) HD Graphics 4600 Direct3D11 vs_5_0 ps_5_0)

I've created this gist that extracts and roughly parses that information:

function extractValue(reg, str) {
    const matches = str.match(reg);
    return matches && matches[0];
}

// WebGL Context Setup
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl');
const debugInfo = gl.getExtension('WEBGL_debug_renderer_info');

const vendor = gl.getParameter(debugInfo.UNMASKED_VENDOR_WEBGL);
const renderer = gl.getParameter(debugInfo.UNMASKED_RENDERER_WEBGL);

// Full card description and webGL layer (if present)
const layer = extractValue(/(ANGLE)/g, renderer);
const card = extractValue(/((NVIDIA|AMD|Intel)[^\d]*[^\s]+)/, renderer);

const tokens = card.split(' ');
tokens.shift();

// Split the card description up into pieces
// with brand, manufacturer, card version
const manufacturer = extractValue(/(NVIDIA|AMD|Intel)/g, card);
const cardVersion = tokens.pop();
const brand = tokens.join(' ');
const integrated = manufacturer === 'Intel';

console.log({
    card,
    manufacturer,
    cardVersion,
    brand,
    integrated,
    vendor,
    renderer
});

Using that information (if it's available) along with other gl context information (like max texture size, available shader precision, etc) and other device information available through platform.js you might be able to develop a guess as to how powerful the current platform is.
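
For example, a rough sketch of pulling a few of those extra context values (reusing the gl context created in the snippet above):

// A few capability hints from the same WebGL context.
const maxTextureSize = gl.getParameter(gl.MAX_TEXTURE_SIZE);
const maxRenderbufferSize = gl.getParameter(gl.MAX_RENDERBUFFER_SIZE);

// Shader precision: high-precision fragment floats are often missing on
// older or low-end mobile GPUs.
const fragHighp = gl.getShaderPrecisionFormat(gl.FRAGMENT_SHADER, gl.HIGH_FLOAT);
const hasHighpFragment = fragHighp && fragHighp.precision > 0;

console.log({ maxTextureSize, maxRenderbufferSize, hasHighpFragment });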

I was looking into this exact problem not too long ago, but ultimately it seemed difficult to make a good guess with so many different factors at play. Instead I opted to build a package that iteratively modifies the scene to improve performance, which could include loading or swapping out model levels of detail.

Hope that helps at least a little!

Is it possible to measure gpu/cpu performance with timing by using an onload event?

WebGL only gives you a way to connect JavaScript to shaders; it doesn't supply any information about device performance.

It is possible to write a small WebGL program and estimate the client's GPU performance from an FPS calculation (http://jsfiddle.net/vZP3u/). With a few different shaders you can make a useful guess. But in some cases such testing can act like a DoS attack, and browsers don't (or shouldn't) allow you to do that (the tests might crash the whole browser, and then people will not visit your site).

You can also get this data with WebGL: http://analyticalgraphicsinc.github.io/webglreport/

In summary, native JavaScript doesn't supply information about the device; you can only run tests to estimate it. You can use AJAX to get a clue about download/upload speed, pure JS for CPU performance, or a WebGL shader for GPU performance. But all such tests depend heavily on what the client is doing with their device at that moment.
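
A minimal sketch of the "pure JS for CPU performance" idea, timing a fixed workload (the iteration count is an arbitrary assumption):

// Time a fixed amount of arithmetic work as a crude CPU score.
// Lower elapsed time suggests a faster (or less busy) CPU.
function cpuScore(iterations = 5e6) {
  const start = performance.now();
  let x = 0;
  for (let i = 0; i < iterations; i++) {
    x += Math.sqrt(i) * Math.sin(i);
  }
  const elapsed = performance.now() - start;  // ms
  return { elapsed, result: x };              // keep x so the loop isn't optimized away
}

console.log(cpuScore());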

In addition, browsers sometimes offer a way to get some data, for example Google Chrome's console.memory, but this is not a cross-browser method.

Is it possible to benchmark a user's system with javascript in the browser

Measure the time it takes to render a few frames of whatever you're doing and adjust the level of detail accordingly. If you don't mind the details changing on the fly, you could use continuous measuring.
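
A minimal sketch of that idea, assuming a hypothetical render(levelOfDetail) function and an arbitrary 30 fps budget (in practice you would average over several frames before adjusting):

// Continuously measure frame time and step the level of detail up or down.
// render() and the thresholds are assumptions; adapt them to your own renderer.
let levelOfDetail = 3;            // e.g. 1 = lowest, 5 = highest
let last = performance.now();

function frame(now) {
  const dt = now - last;          // ms spent on the previous frame
  last = now;

  if (dt > 1000 / 30 && levelOfDetail > 1) levelOfDetail--;       // too slow: simplify
  else if (dt < 1000 / 60 && levelOfDetail < 5) levelOfDetail++;  // headroom: add detail

  render(levelOfDetail);          // hypothetical draw call
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);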

What you don't want to do is make your user sit through a five-minute benchmark before they can do anything with your thing.


