Is CSS Faster When You Are Specific

Is CSS faster when you are specific?

In the real world, the speed difference would be negligible.

To be technical, .container would be faster, as the browser has fewer selector parts to process.

Selectors have an inherent efficiency. The order of more to less efficient CSS selectors goes thus:

  1. ID, e.g. #header
  2. Class, e.g. .promo
  3. Type, e.g. div
  4. Adjacent sibling, e.g. h2 + p
  5. Child, e.g. li > ul
  6. Descendant, e.g. ul a
  7. Universal, i.e. *
  8. Attribute, e.g. [type="text"]
  9. Pseudo-classes/-elements, e.g. a:hover
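This ordering follows from how engines match selectors: right to left, starting from the rightmost ("key") part. For a descendant selector like ul a, every candidate element may have to walk up its whole ancestor chain. A minimal sketch of that matching logic, using a toy node structure rather than any real browser API:

```javascript
// Sketch: why "ul a" costs more than ".promo". Engines match selectors
// right to left, so each candidate element walks UP its ancestors.
// Toy DOM: each node has a tag and a parent pointer (illustrative only).

function matchesDescendant(node, ancestorTag, targetTag) {
  // Right-to-left: first check the rightmost part ("a")...
  if (node.tag !== targetTag) return false;
  // ...then walk up the tree looking for the ancestor ("ul").
  let p = node.parent;
  while (p) {
    if (p.tag === ancestorTag) return true;
    p = p.parent;
  }
  return false; // reached the root without finding the ancestor
}

// Build a tiny tree: <div> -> <ul> -> <li> -> <a>
const div = { tag: 'div', parent: null };
const ul  = { tag: 'ul',  parent: div };
const li  = { tag: 'li',  parent: ul };
const a   = { tag: 'a',   parent: li };

console.log(matchesDescendant(a, 'ul', 'a'));  // true, after walking 2 ancestors
console.log(matchesDescendant(a, 'nav', 'a')); // false, after walking to the root
```

A class selector like .promo, by contrast, succeeds or fails on the element itself, with no ancestor walk.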

In regards to your second question:

Is there a way to measure performance in CSS?

Steve Souders put out an online test to measure performance of CSS that can still be accessed here.

There are better ways to measure performance nowadays, but this is a quick and easy resource you can play with.

Performance-wise, do things like this even matter, or does it all depend on text weight, basically?

The short answer is "not really".

The long answer is, "it depends". If you are working on a simple site, there is really no point in making a fuss about CSS performance beyond the general knowledge you may gain from best practices.

If you are creating a site with tens of thousands of DOM elements then yes, it will matter.

Are more specific selectors faster to parse?

I don't think we can say that specificity improves parse speed. What it slightly improves is render speed. There are other factors that affect render speed as well, such as the CSS file size and having too many rules redefining the same class again and again.
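As an illustration of that redefinition problem (the .promo class name here is made up), compare scattered redefinitions with one consolidated rule:

```css
/* Slower to resolve and harder to maintain: the same class redefined again and again */
.promo { color: #333; }
.promo { margin: 8px; }
.promo { padding: 4px; }

/* Better: one consolidated rule */
.promo {
  color: #333;
  margin: 8px;
  padding: 4px;
}
```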

A couple of recommendations for improving your renders:

  1. Define your basic rules for those elements you are going to use in general content (paragraphs, lists, bold, italics...). This way you'll overwrite the default browser CSS rules for them.
  2. If there are rules that are not common to every <li>, e.g. ones specific to a module, using the module's parent class to define them will save the browser from crawling the whole HTML, and you won't have to overwrite rules afterwards when you use the same tag for a different design piece.
  3. If there are several levels of child nodes, use classes for each level that needs to be customised instead of adding more levels hanging off the parent class; otherwise you'll end up with long chains of hierarchical selectors when you need exceptions.
  4. Try creating CSS modules that you can recycle across your whole design by applying a single class that defines that modular element.
  5. Avoid using * on its own if possible. Nowadays it's not as much of a problem as it was 20 years ago, but the more specific, the better for speed.
  6. Don't abuse :before and :after; use them responsibly. These are pseudo-elements that modify the DOM on the fly, and sometimes some browsers do not render them properly.
  7. Try using CSS shorthands as a general rule.
  8. If you use background patterns, balance image size and repeat frequency. Create a 40px PNG or GIF pattern instead of using the minimum size of 4px, for example, as the browser will have to render the image 10 times less often, which is worth the small extra file size.
  9. Use sprites for icons and similar elements, but don't make them too big. You can make a sprite per colour. I also recommend doing it vertically, all in one column; that way you can find icons easily using something like background-position-y: calc(-8 * $module). This method will save you a lot of CSS rules for defining positions, and also HTML elements, as you won't have to cluster the background images on both axes (x, y).
  10. If you are using <img> tags, always add width and height attributes inline. And if those images are .jpg, use a compression of about 60 and save them as progressive JPEGs.

I think with these basic rules you won't have any render problems in most cases.

h1 {
  margin: 16px 8px;
  font-size: 24px;
  color: #999966;
}
p {
  margin: 8px 8px 16px;
}
.thumbnail {
  float: left;
  margin: 0 8px;
}
.content, aside {
  display: table-cell;
}
.content {
  width: 75%;
}
.content ul {
  margin: 8px;
  padding: 8px 16px;
}
.content li {
  margin: 8px 0;
}
.tabs {
  margin: 0;
  padding: 0;
  background: #ddd;
}
.tab {
  list-style: none;
  padding: 8px;
}
.tab + .tab {
  border-top: solid 2px #fff;
}

<section class="content">
  <h1>My title</h1>
  <img class="thumbnail" src="https://fakeimg.pl/250x100/" title="my image"/>
  <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.</p>
  <p>Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.</p>
  <p>Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.</p>
  <ul>
    <li>Excepteur sint occaecat cupidatat</li>
    <li>non proident sunt in culpa</li>
    <li>qui officia deserunt mollit anim id est laborum.</li>
  </ul>
</section>
<aside>
  <ul class="tabs">
    <li class="tab">Tab 1</li>
    <li class="tab">Tab 2</li>
    <li class="tab">Tab 3</li>
  </ul>
</aside>

Which CSS selector is faster?

CSS performance check
It depends on the browser.

External CSS vs inline style performance difference?

The performance boost that your friend mentioned is probably trivial compared to the performance boost you gain (through other factors) by using a CSS file.

Using the style attribute, the browser only paints the rule for that particular element, which in this case is the <div> element. This reduces the amount of look-up time for the CSS engine to find which elements match the CSS selector (e.g. a:hover or #someContainer li).

However, putting styling at the element level means that you cannot cache the CSS style rules separately. Putting styles in CSS files usually allows caching to be done, thus reducing the amount loaded from the server each time you load a page.

Putting style rules at the element level will also make you lose track of which elements are styled in which way. It might also negate the performance benefit of painting a particular element, since the browser can repaint multiple elements together. Using CSS files separates the CSS from the HTML, which allows you to make sure that your styles are correct, and makes them easier to modify later on.

Therefore if you look at the comparison, you would see that using a CSS file has much more benefit than styling at element level.

Not to forget, when you have an external CSS stylesheet file, your browser can cache the file, which increases your application's efficiency!
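A minimal before/after sketch (the file and class names are made up): the inline version re-sends its styles with every HTML response, while the external file is fetched once and then reused from cache:

```html
<!-- Inline: the style travels with every HTML response and cannot be cached separately -->
<div style="float: left; margin: 0 8px;">Thumbnail</div>

<!-- External: /styles.css is fetched once, then served from the browser cache -->
<link rel="stylesheet" href="/styles.css">
<div class="thumbnail">Thumbnail</div>
```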

Is !important bad for performance?

It shouldn't have any discernible effect on performance. Looking at Firefox's CSS parser at /source/layout/style/nsCSSDataBlock.cpp#572, I think that is the relevant routine, handling the overwriting of CSS rules.

It just seems to be a simple check for "important".

  if (aIsImportant) {
    if (!HasImportantBit(aPropID))
      changed = PR_TRUE;
    SetImportantBit(aPropID);
  } else {
    // ...
  }

Also, comments at source/layout/style/nsCSSDataBlock.h#219

    /**
     * Transfer the state for |aPropID| (which may be a shorthand)
     * from |aFromBlock| to this block. The property being transferred
     * is !important if |aIsImportant| is true, and should replace an
     * existing !important property regardless of its own importance
     * if |aOverrideImportant| is true.
     *
     * ...
     */


  1. Firefox uses a top-down parser written manually (WebKit, by contrast,
    uses a parser generated with Flex and Bison). In both cases each CSS
    file is parsed into a StyleSheet object, and each object contains CSS
    rules.

  2. Firefox then creates style context trees, which contain the end values
    (after applying all the rules in the right order).

[Diagram: the Firefox CSS parser]

From: http://taligarsiel.com/Projects/howbrowserswork1.htm#CSS_parsing

Now you can easily see that, in such a case, with the object model described above, the parser can mark the rules affected by !important easily, without much subsequent cost. Performance degradation is not a good argument against !important.

However, maintainability does take a hit (as other answers mentioned), which might be your only argument against them.
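The cheapness of that check can be sketched with a simplified cascade model (an illustration only, not Firefox's actual algorithm): importance is a single boolean comparison, made before specificity is even considered.

```javascript
// Simplified cascade resolution for one property: an !important
// declaration beats a later, more specific normal one.
// Illustrative model only; field names are made up.

function resolve(declarations) {
  // declarations: [{ value, specificity, important }] in source order
  let winner = null;
  for (const d of declarations) {
    if (!winner) { winner = d; continue; }
    if (d.important && !winner.important) {
      winner = d;                    // !important always beats normal
    } else if (d.important === winner.important &&
               d.specificity >= winner.specificity) {
      winner = d;                    // same importance: specificity, then source order
    }
  }
  return winner.value;
}

const color = resolve([
  { value: 'red',  specificity: 1,   important: true  }, // .a { color: red !important }
  { value: 'blue', specificity: 100, important: false }, // #b { color: blue }
]);
console.log(color); // "red": the single importance check decides
```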

Performance when combining specific page styles AND global style in the same page

Yes, what you are doing is perfectly valid and common.

CSS is perhaps a bad example, but the same principle applies to any resource (and note that the second one should be loaded in via Ajax).

Take images, say.

We are on page 1 of our website, and we know that 99.999% of the time our visitors are going to click through to page 2, and that on page 2 we have some large images to serve. So we may load them silently after page 1 has loaded, getting ready; then the site 'feels' fast as they navigate. This is a common trick in mobile web applications/sites.

So yes:

It is the same principle for ANY type of file that you may want to 'pre cache' for subsequent requests.

  • Load the page
  • While the visitor is 'reading' the loaded page, pre-fetch files/data that
    you expect they may request next (images, page 2 of result data, JavaScript, and CSS). These are loaded via Ajax so as not to hold up the page 'onload' event firing, a key difference from your example.

However, to answer your goal of allowing the pages to load as fast as possible:

Doing this, or any kind of 'pre-emptive loading' technique, adds little to 'speed of delivery' unless you are serving static files from a static server, a cookieless domain, and ultimately a Content Delivery Network.


What achieves the goal of allowing the pages to load as fast as possible is serving static files differently from your dynamic content (PHP-rendered and so on):

1) Create a subdomain for these resources ( css, js, images/media ) - static.yourdomain.com

2) Turn off cookies and unnecessary headers, and tune cache headers specifically for this subdomain.

3) Look into using a service like http://cdnify.com/ or www.akamai.com.

These are the performance and speed steps for serving static content. (I hope I'm not teaching anyone to suck eggs; this is directly related to the question, in case anyone is unfamiliar with these steps.)

The 'pre-emptive loading' techniques are still great, but they are now more related to preloading data for usability than to speed.


Edit/Update:

To clarify 'speed' and 'usability speed'.

  • Speed is often judged by software as the moment the page 'onload' event fires (that is why it is important to load these 'pre-emptive' resources via Ajax).

  • Perceived speed (usability) is how quickly a user can see and interact with the content (even though the page load event may not have fired).


Edit/update

In a few areas of the post and in the comments was mentioned the loading of these additional 'pre emptive' resources via javascript/ajax.

The reason is to not delay the page 'onload' event firing.

Many website test speed tools ( yslow, google .. ) use this 'onload' event to judge page speed.

Here we delay the page 'onload' event:

<body>

  ... page content

  <link rel="stylesheet" href="/nextpage.css" />
</body>

Here we load via JavaScript (or, in some cases, via Ajax for page data) without preventing the page load event:

<body>

  ... page content

  <script>
    window.onload = function () {
      var style = document.createElement( 'link' );
      style.rel = 'stylesheet';
      style.type = 'text/css';
      style.href = '/nextpage.css';
      document.getElementsByTagName( 'head' )[0].appendChild( style );
    };
  </script>
</body>

( this, as a bonus, also gets around the compatibility problems with having a <link> tag within the <body> as discussed in your other threads )

Does the order of rules in a CSS stylesheet affect rendering speed?

After some more testing and reading, I came to the following conclusion: no, it does not matter. Even after some 'extreme' testing, I could not find anything that supports the idea that the order matters.

There were no 'flashes of unstyled content' or the like; it just took way longer to load the page (way, way longer :D).

Tests I ran
I created a test page with 60,000 div elements, each having a unique ID attribute. Each of these IDs had its own CSS rule applied to it. Below that I had a single span element with a CLASS attribute, which also had a CSS rule linked to it.

These tests created an HTML file of 2 MB with a corresponding CSS file of 6 MB.

At first I attempted these tests with 1,000,000 divs and CSS rules, but Firefox did not approve and started crying, begging me to stop.

I generated these elements and their CSS with the following simple PHP snippets:

<?php

for ($i = 0; $i < 60000; $i++) {
    echo "
    #test$i {
        position: absolute;
        width: 1px;
        height: 1px;
        top: " . $i . "px;
        left: 0;
        background: #000;
    } <br />
    ";
}

?>

And

<?php

for ($i = 0; $i < 60000; $i++) {
    echo "<div id=\"test$i\"></div>";
}

?>

The result was put into an HTML and a CSS file afterwards to check the outcome.

Mind you, my browser (Firefox 5) really did not appreciate me playing around with this; it had some real issues generating the output, and the occasional "this program is not responding" message was not afraid to show its face.

These tests were run on localhost, served by a simple XAMPP installation; it is possible that external servers would produce different results, but I am currently unable to test that.

I tested a few variations on the above:

  • Placing the span element before all the generated divs, in the
    middle, and at the end
  • Placing the span's CSS definition before, in the middle, or at the end
    of the CSS file

Oh, and may I suggest http://www.youtube.com/watch?v=a2_6bGNZ7bA: while it doesn't exactly cover this question, it does provide some interesting details about how Firefox (and possibly other browsers) work with the stuff we throw at them.

Single huge .css file vs. multiple smaller specific .css files?

A CSS compiler like Sass or LESS is a great way to go. That way you'll be able to deliver a single, minimised CSS file for the site (which will be far smaller and faster than a normal single CSS source file), while maintaining the nicest development environment, with everything neatly split into components.

Sass and LESS have the added advantage of variables, nesting and other ways to make CSS easier to write and maintain. Highly, highly recommended. I personally use Sass (SCSS syntax) now, but used LESS previously. Both are great, with similar benefits. Once you've written CSS with a compiler, it's unlikely you'd want to do without one.
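For instance (the class names and variable here are made up), a few lines of SCSS keep the source modular while compiling down to flat CSS:

```scss
// One variable, one nested module
$accent: #996;

.tabs {
  background: #ddd;

  .tab {
    padding: 8px;
    &:hover { color: $accent; }
  }
}
```

This compiles to the plain rules .tabs, .tabs .tab, and .tabs .tab:hover, so the delivered file stays ordinary CSS.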

http://lesscss.org

http://sass-lang.com

If you don't want to mess around with Ruby, this LESS compiler for Mac is great:

http://incident57.com/less/

Or you could use CodeKit (by the same guys):

http://incident57.com/codekit/

WinLess is a Windows GUI for compiling LESS:

http://winless.org/

What is faster: more classes or unique class

That depends on how much of the code that you can actually reuse, but the performance difference won't be that big.

The biggest difference between the approaches is what the classes mean. Generally you should have classes that represent what you are trying to show, not exactly what styles you use to show it. Names that represent the intent rather than the implementation fare better when you make changes.

For example, if you have the class right25 and want to change the margin to 20 pixels instead, then you either end up with a class name that doesn't represent what it actually does, or you have to change it to right20 everywhere you use it.
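A short sketch of the naming difference (class names are hypothetical):

```css
/* Implementation-named: a margin change forces a rename everywhere it is used */
.right25 { margin-right: 25px; }

/* Intent-named: the value can change without touching the HTML */
.promo-offset { margin-right: 25px; }
```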


