Is !important Bad for Performance?

Is !important bad for performance?

It shouldn't have any discernible effect on performance. Looking at Firefox's CSS parser at /source/layout/style/nsCSSDataBlock.cpp#572, that appears to be the relevant routine, handling the overwriting of CSS rules.

It just comes down to a simple check for "important":

    if (aIsImportant) {
      if (!HasImportantBit(aPropID))
        changed = PR_TRUE;
      SetImportantBit(aPropID);
    } else {
      // ...
    }

Also, see the comments at source/layout/style/nsCSSDataBlock.h#219:

    /**
     * Transfer the state for |aPropID| (which may be a shorthand)
     * from |aFromBlock| to this block. The property being transferred
     * is !important if |aIsImportant| is true, and should replace an
     * existing !important property regardless of its own importance
     * if |aOverrideImportant| is true.
     *
     * ...
     */


  1. Firefox uses a top-down parser written manually. Each CSS file is
     parsed into a StyleSheet object, and each object contains CSS rules.

  2. Firefox then creates style context trees, which contain the end
     values (after applying all rules in the right order).

[Figure: Firefox CSS parser]

From: http://taligarsiel.com/Projects/howbrowserswork1.htm#CSS_parsing

Now you can easily see that, with the object model described above, the parser can mark the rules affected by !important without much subsequent cost. Performance degradation is not a good argument against !important.
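
As a purely illustrative sketch (the type and function names here are hypothetical, not Firefox's actual classes), a parser that records importance while building the object model pays only a constant, one-time cost per declaration:

    // Hypothetical sketch of the object model described above; the names
    // are illustrative, not Firefox's actual classes.
    interface Declaration {
      property: string;    // e.g. "color"
      value: string;       // e.g. "red"
      important: boolean;  // recorded once, at parse time
    }

    interface Rule {
      selector: string;
      declarations: Declaration[];
    }

    interface ParsedStyleSheet {
      rules: Rule[];
    }

    // Parsing "color: red !important" sets the flag while the declaration
    // is being built: a constant-time check per declaration, not a
    // separate pass over the stylesheet.
    function parseDeclaration(text: string): Declaration {
      const important = /!important\s*$/.test(text);
      const body = text.replace(/!important\s*$/, "").trim();
      const [property, value] = body.split(":").map(s => s.trim());
      return { property, value, important };
    }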

However, maintainability does take a hit (as other answers have mentioned), which might be your only real argument against it.

Is Performance Always Important?

Adequate performance is always important.

Absolute fastest possible performance is almost never important.

It's always worth keeping an eye on performance and being aware of anything outrageously non-optimal that you're doing (particularly at a design/architecture level) but that's not the same as micro-optimising every line of code.

Should a developer aim for readability or performance first?

You missed one.

First code for correctness, then for clarity (the two are often connected, of course!). Finally, and only if you have real empirical evidence that you actually need to, you can look at optimizing. Premature optimization really is evil. Optimization almost always costs you time, clarity, and maintainability. You'd better be sure you're buying something worthwhile with that.

Note that good algorithms almost always beat localized tuning. There is no reason you can't have code that is correct, clear, and fast. You'll be unreasonably lucky to get there by starting off focusing on "fast", though.

What is the performance impact of the universal selector?

In modern browsers the performance impact is negligible, provided you don't apply slow effects to every element (e.g. box-shadow, z-axis rotation). The myth that the universal selector is slow is a hangover from ten years ago, when it was slow.

Reference: http://www.kendoui.com/blogs/teamblog/posts/12-09-28/css_tip_star_selector_not_that_bad.aspx

Which CSS selectors or rules can significantly affect front-end layout / rendering performance in the real world?

The first thing that comes to mind here is: how clever is the rendering engine you're using?

That, generic as it sounds, matters a lot when questioning the efficiency of CSS rendering/selection. For instance, suppose the first rule in your CSS file is:

    .class1 {
      /* make elements with "class1" look fancy */
    }

So when a very basic engine sees that rule (and since it's the first rule), it walks every element in your DOM and checks for the existence of class1 on each. Better engines probably map class names to lists of DOM elements, using something like a hash table for efficient lookup.
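
A minimal sketch of such an index (hypothetical names, not any real engine's internals):

    // Hypothetical class-name index: matching ".class1" becomes a single
    // hash-table lookup instead of a walk over the whole DOM.
    const byClass = new Map<string, Element[]>();

    function indexElement(el: Element): void {
      for (const cls of el.classList) {
        const bucket = byClass.get(cls) ?? [];
        bucket.push(el);
        byClass.set(cls, bucket);
      }
    }

    // One lookup instead of a full scan:
    const class1Elements = byClass.get("class1") ?? [];

Now suppose a rule requires both of two classes: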

    .class1.class2 {
      /* make elements with both "class1" and "class2" look extra fancy */
    }

Our example "basic engine" would go and revisit each element in the DOM, looking for both classes. A cleverer engine will compare n('class1') and n('class2'), where n(str) is the number of elements in the DOM with the class str, take whichever is smaller (suppose that's class1), and then pass over all elements with class1 looking for those that have class2 as well.
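
Continuing the hypothetical index above, that smaller-set strategy might look like this:

    // Scan only the smaller of the two class buckets, checking each of
    // its elements for the other class (the cleverer strategy above).
    function matchBothClasses(a: string, b: string): Element[] {
      const setA = byClass.get(a) ?? [];
      const setB = byClass.get(b) ?? [];
      if (setA.length <= setB.length) {
        return setA.filter(el => el.classList.contains(b));
      }
      return setB.filter(el => el.classList.contains(a));
    }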

In any case, modern engines are clever (far cleverer than the basic example discussed above), and shiny new processors can do millions (tens of millions) of operations per second. It's quite unlikely that you have millions of elements in your DOM, so the worst-case cost of any selection, O(n), won't be too bad anyhow.


Update:

To get some practical, illustrative proof, I decided to run some tests. First of all, to get an idea of how many DOM elements we can expect in real-world applications, let's take a look at how many elements some popular sites' webpages have:

Facebook: ~1900 elements (tested on my personal main page).

Google: ~340 elements (tested on the main page, no search results).

Google: ~950 elements (tested on a search result page).

Yahoo!: ~1400 elements (tested on the main page).

Stackoverflow: ~680 elements (tested on a question page).

AOL: ~1060 elements (tested on the main page).

Wikipedia: ~6000 elements, 2420 of which aren't spans or anchors (tested on the Wikipedia article about Glee).

Twitter: ~270 elements (tested on the main page).

Averaging those, we get roughly 1500 elements per page. Now it's time to do some testing. For each test, I generated 1500 divs (nested within some other divs for some tests), each with appropriate attributes depending on the test.

The tests

The styles and elements are all generated using PHP. I've uploaded the PHP files I used and created an index, so that others can test locally: little link.
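
For a feel of the setup without the PHP files, here is a rough browser-side equivalent; this is an assumed reconstruction of the harness, not the author's actual code, and timing querySelectorAll only approximates the rendering cost the tests measured:

    // Assumed reconstruction of the test setup (the original pages were
    // generated with PHP; names and structure here are guesses).
    function buildTestPage(count = 1500): void {
      for (let i = 0; i < count; i++) {
        const div = document.createElement("div");
        div.className = "class" + i; // one unique class per element
        div.title = "ttl";           // used by the attribute-selector tests
        document.body.appendChild(div);
      }
    }

    // Crude timing of one selector pass:
    buildTestPage();
    const t0 = performance.now();
    document.querySelectorAll('.class1[title="ttl"]');
    console.log((performance.now() - t0).toFixed(3) + "ms");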

Results:

Each test was performed 5 times in each of three browsers (the average time is reported): Firefox 15.0 (A), Chrome 19.0.1084.1 (B), and Internet Explorer 8 (C):

    Test                                                                     A       B       C
    1500 class selectors (.classname)                                     35ms   100ms    35ms
    1500 class selectors, more specific (div.classname)                   36ms   110ms    37ms
    1500 class selectors, even more specific (div div.classname)          40ms   115ms    40ms
    1500 id selectors (#id)                                               35ms    99ms    35ms
    1500 id selectors, more specific (div#id)                             35ms   105ms    38ms
    1500 id selectors, even more specific (div div#id)                    40ms   110ms    39ms
    1500 class selectors, with attribute (.class[title="ttl"])            45ms   400ms  2000ms
    1500 class selectors, more complex attribute (.class[title~="ttl"])   45ms  1050ms  2200ms

Similar experiments:

Apparently other people have carried out similar experiments; this one has some useful statistics as well: little link.

The bottom line:

Unless you care about saving a few milliseconds when rendering (1ms = 0.001s), don't give this too much thought. On the other hand, it's good practice to avoid using complex selectors to select large subsets of elements, as that can make a noticeable difference (as the test results above show). All common CSS selectors are reasonably fast in modern browsers.

Suppose you're building a chat page and you want to style all the messages. You know that each message is in a div that has a title and is nested within a div with the class .chatpage. It is correct to use .chatpage div[title] to select the messages, but it's bad practice efficiency-wise. It's simpler, more maintainable, and more efficient to give all the messages a class and select them using that class.
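
To make the comparison concrete, a quick console experiment might look like this (the .message class name is hypothetical, following the example above):

    // Hypothetical micro-benchmark of the two selection strategies.
    function timeSelector(selector: string, runs = 100): number {
      const start = performance.now();
      for (let i = 0; i < runs; i++) {
        document.querySelectorAll(selector);
      }
      return (performance.now() - start) / runs; // average ms per run
    }

    console.log(timeSelector(".chatpage div[title]")); // attribute scan
    console.log(timeSelector(".message"));             // single class lookup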

The fancy one-liner conclusion:

Anything within the limits of "yeah, this CSS makes sense" is okay.

Is the &method(:method_name) idiom bad for performance in Ruby?

Yes, it appears to be bad for performance on MRI: &method(:do_nothing) builds a Method object and wraps it in a proc, and every call then dispatches through that wrapper instead of yielding directly to the inline block.

    def time
      start = Time.now
      yield
      "%.6f" % (Time.now - start)
    end

    def do_nothing(arg)
    end

    RUBY_VERSION # => "1.9.2"

    # small
    ary = *1..10
    time { ary.each(&method(:do_nothing)) }    # => "0.000019"
    time { ary.each { |arg| do_nothing arg } } # => "0.000003"

    # large
    ary = *1..10_000
    time { ary.each(&method(:do_nothing)) }    # => "0.002787"
    time { ary.each { |arg| do_nothing arg } } # => "0.001810"

    # huge
    ary = *1..10_000_000
    time { ary.each(&method(:do_nothing)) }    # => "37.901283"
    time { ary.each { |arg| do_nothing arg } } # => "1.754063"

It looks like this is addressed in JRuby:

    $ rvm use jruby
    Using /Users/joshuajcheek/.rvm/gems/jruby-1.6.3

    $ xmpfilter f.rb
    def time
      start = Time.now
      yield
      "%.6f" % (Time.now - start)
    end

    def do_nothing(arg)
    end

    RUBY_VERSION # => "1.8.7"

    # small
    ary = *1..10
    time { ary.each(&method(:do_nothing)) }    # => "0.009000"
    time { ary.each { |arg| do_nothing arg } } # => "0.001000"

    # large
    ary = *1..10_000
    time { ary.each(&method(:do_nothing)) }    # => "0.043000"
    time { ary.each { |arg| do_nothing arg } } # => "0.055000"

    # huge
    ary = *1..10_000_000
    time { ary.each(&method(:do_nothing)) }    # => "0.427000"
    time { ary.each { |arg| do_nothing arg } } # => "0.634000"

