Social Media can be ‘Safe for Work’ if…

How many times have you been through this situation? You want to do some ‘real work’ on social networks: maybe send an IM to a co-worker, upload some recent work like blog posts, illustrations, photos or videos, or post a work-related status. So you log into the respective account, and before you know it you are checking your notifications, your news feed and what others have posted. After 30 minutes of scrolling through a never-ending stream, you realise you have forgotten the reason you logged in. You just wasted 30 minutes of your time and you have no idea why. I am not just talking about Facebook or Twitter; I am talking about profession-related social networks too, like Dribbble, Behance, SoundCloud, 500px or even Flickr.

It is a sad truth, but social media is really addictive, and it is meant to be that way. Each social network builds a habit of checking what others are doing, how many likes you have and what not; this is the biggest con of any social network. I am not pessimistic about social networks. There is no denying that they have a lot of potential; they can turn a nation upside down. But as far as we are concerned, what we do online is our new resume. Hence it is important not only to have active social network accounts, but also to showcase your work on the appropriate networks. Be it GitHub, Behance or Dribbble, your social profile represents your true talents.

As said earlier, it is easy to get distracted on social networks. But there are some good tools which help with this, especially when you are publishing your work.

1. Buffer — Buffer is the perfect abstraction between you and your social networking. It helps you share to various social media like Facebook, Twitter, Google+, LinkedIn and App.net. If you have written a new blog post or want to share some images, like a photograph or an illustration of your latest work, just add them to your Buffer and it will post them everywhere for you. No need to log into the social networks just to post. The great thing about Buffer is that you can even schedule the day and time at which each post is published.

Buffer's browser extension in action

Buffer also provides analytics about retweets, favourites, likes, shares and potential reach, because checking how our post is doing is another primary reason we log into social networks.

2. IFTTT — IFTTT means “If This Then That”. Let’s say you have recently uploaded a new sound on SoundCloud; in IFTTT you add recipes like “Tweet when a new Sound is added” or “Share on Facebook when a new Sound is created”. IFTTT provides thousands of such recipes, where you specify triggers and the actions to perform on those triggers.

IFTTT in action

This way you can connect profession-related social networks like 500px, SoundCloud and Behance, and when you post your work there, it will automatically be posted to your Twitter, LinkedIn or Facebook profile/page.

3. Zapier — Zapier is similar to IFTTT, but it has more app integrations, covering everything from social media and email to project management and CRM. It is focused more on small and medium enterprises than on consumers, but it can automate your work just like IFTTT.

Zapier's SoundCloud-Twitter Zap

On Zapier, automated recipes are called ‘Zaps’. Head over to their ‘Zapbook’ to see all the supported services.

With tools like Rapportive, it has become really easy for anyone to find all your social network profiles and get a glimpse of who you are and what you have done. Hence these automation apps are really crucial: they not only help you maintain your social media presence, but also prevent you from getting distracted by constant social network notifications.

This is one of the key principles I keep in mind while building Expojure. We want creative minds like photographers to publish their work on social media sites like Flickr, Facebook, 500px etc. without getting distracted by the activity on those networks. Such automation apps help us use social networks for the real reason of “connecting people”; after all, that is what social networks are built for, right?


Facebook’s ‘A Look Back’ video is composed in CSS3

Yesterday evening I got a notification from Facebook that my “A Look Back” video was ready, and I decided to give it a shot. Without thinking much about it, I simply hit the play button, and as the video progressed I realized some of the images were only half loaded. That was weird: if a video is being played, how is ‘image loading’ happening? Like any curious web developer, I took out ‘our’ Swiss Army knife, a.k.a. Chrome DevTools. And voilà, it was nothing but CSS3. My jaw dropped: the whole video was composed in PURE CSS3 and JavaScript. Who would have thought?

Go check for yourself: visit your ‘A Look Back’ page, open DevTools and play the video (if your video is not ready yet, you will only see tiles of images).


As you can see, this is the DOM tree of the photo tiles; the links, positions and dimensions of the images are visible on the right.

scale and translate

Throughout the video, the scale and translate functions are primarily used to create the moving and scaling effects.
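The effect boils down to rewriting an element’s transform from script on every tick. A minimal sketch (the helper name placeTile is my own illustration, not from Facebook’s code):

```javascript
// Move and scale a tile by rewriting its transform each frame,
// the way the Look Back animation appears to do it.
// placeTile is a hypothetical helper name, not Facebook's.
function placeTile(el, x, y, scale) {
  el.style.transform =
    'translate(' + x + 'px, ' + y + 'px) scale(' + scale + ')';
}
```

Calling placeTile(tile, 120, 40, 1.5) on every animation tick produces moving, zooming photo tiles without any video file at all.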


No surprise there as your statuses are treated as just another SPAN element.

Object object

Apparently the code is not bug-free (if you know what I mean)


Yes, the song is played via an audio tag; you can even copy its source link and play it in a new tab.

After some observation, I realized there is no use of transition at all. All the ‘easing’-related calculations are done via JavaScript and applied dynamically. That is weird for sure. I am using transitions heavily in my startup Expojure for various animations, and the results couldn’t be better. Even John Resig recommended CSS transitions over JavaScript-based animations.
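JavaScript-driven easing of this kind is simple enough to sketch; the function names below are my own illustration, not Facebook’s code:

```javascript
// Compute eased values in JS each tick instead of using CSS transitions.
// easeOutQuad is a standard easing curve; valueAt interpolates with it.
function easeOutQuad(t) {            // t runs from 0 to 1
  return 1 - (1 - t) * (1 - t);
}
function valueAt(from, to, t) {      // eased interpolation between two values
  return from + (to - from) * easeOutQuad(t);
}
```

Each frame you would compute something like valueAt(startX, endX, elapsed / duration) and write the result into the element’s style, which matches the “applied dynamically” behaviour described above.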

But the major question remains: how is the whole animation sequence converted into a video? Because when you ‘Share Your Movie’, it is nothing but a traditional video. From my understanding, it must be done using a headless browser on top of which screen capturing happens. Let us assume PhantomJS is used, as it is the most popular headless browser out there, and it supports screen capture. So a snippet like –

var page = require('webpage').create();
page.open(url, function () {   // url: address of the page to capture
  page.render('snapshot.png'); // capture a single frame
  phantom.exit();
});

would create a single snapshot of the page. Considering the video is shot at 60 FPS with a duration of 60 seconds, about 3600 images must have been taken via screen capture. This stack of images can then be converted into a video using libraries like OpenCV. The whole concept is just my assumption; maybe the real process is different or simpler than this. I would love to hear your comments on this.
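The frame bookkeeping this theory implies is trivial to sketch (purely my assumption about the pipeline):

```javascript
// At 60 FPS for 60 seconds we'd capture 3600 numbered snapshots,
// each name being the kind of argument you'd pass to page.render().
function frameNames(fps, seconds) {
  var names = [];
  for (var i = 0; i < fps * seconds; i++) {
    names.push('frame-' + i + '.png');
  }
  return names;
}
```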

It is undoubtedly some of the finest software engineering. It also makes sense: unlike Google+, Facebook gives you a choice of the photos and statuses you want to include in the video. So it just stores a list of posts in a database and, using that list, the whole DOM animation is rendered on the servers to create a video. I am not sure CSS was the obvious choice, though; from my experience with 3DTin, where we partly worked on WebGL-based animation, the obvious choice would have been the Canvas API.

What really impressed me was the implementation of the KISS principle. Instead of creating a video editor (in the Canvas API), Facebook kept everything simple and straightforward from both the developer’s and the viewer’s perspective. It lets you select the posts you want to see, and the script takes care of the rest. From a web developer’s perspective, it is indeed top-notch work. After all, why would you build a video editor just to let users customize a montage (which they are going to forget after a few months)?

Kudos to Facebook’s Engineering Team. Great Work.



Guidelines for better and faster CSS

Gone are the days when we used to write arbitrary CSS without thinking much about its structure. As developing web applications becomes more and more common, it’s important to organize CSS in a proper way. Organizing CSS is not only beneficial for the rest of the team; it can also significantly impact the performance of your web application. The CSS of a web application can go well beyond 10k lines, and in such scenarios it becomes difficult not only to manage the code but also to scale it without breaking other components of the page. Philip Walton said on the architecture of CSS[1]:

Goals of a good CSS architecture shouldn’t be that different from the goals of all good software development. CSS should be predictable, reusable, maintainable and scalable.


Predictable — The output of CSS should be predictable. While working in teams, regression happens all the time, especially in CSS: where the architecture is not properly defined, one minor change can bring undesirable effects by breaking components on other pages.

Reusable — Reusability is an essential part of software development. Whether it is an object-oriented framework or MVC, reusability, or ‘Don’t Repeat Yourself’, is an essential part of programming architecture. It also applies to CSS. Rules should be defined in such a way that they are reusable. This reduces development time and brings consistency to the user interface design of the application.

Maintainable — Updating CSS can also be a hassle: updating one rule can break another component. That should not happen in a proper architecture.

Scalable — It should not matter whether the development team consists of a single person or many; adding new rules should not bring regression. The architecture of a stylesheet should be so obvious that anyone can extend the code without reading documentation.

People often complain that it’s easy to go through thick computer science books and talk about good programming practices which are never implemented in the real world. But there are popular open source projects like jQuery UI, Twitter Bootstrap and WordPress[2] which follow these practices.

Object Oriented CSS

Object-oriented principles consist of abstraction, encapsulation, inheritance, polymorphism and reusability. All of these can also be applied to CSS. While defining rules, one should always make sure that skin and layout do not come together: there should be separate classes for skin and layout. If this is followed, it becomes easier to extend and maintain styles.

.button {
  font-size: 1.125em;

  border: none;
  border-radius: 1.25em;
  cursor: pointer;
  text-decoration: none;
  text-align: center;
  outline: none;

  vertical-align: middle;
  display: inline-block;
  padding: 8px 16px;
  margin: 1.5em 0;
}

.button-primary {
  background-image: -webkit-linear-gradient(top, #F1394C 5%, #BC3135 100%);
  background-image: -moz-linear-gradient(top, #F1394C 5%, #BC3135 100%);
  background-image: -ms-linear-gradient(top, #F1394C 5%, #BC3135 100%);
  background-image: -o-linear-gradient(top, #F1394C 5%, #BC3135 100%);
  background-image: linear-gradient(top, #F1394C 5%, #BC3135 100%);
  background-color: #F1394C;

  border-bottom: 1px solid #2A0E00;
  color: #FFCCCC;
}

.button-large {
  font-size: 1.5em;
  padding: 12px 20px;
}

The subsequent HTML implementation of these classes will look like

<!-- Standard Button -->
<a class="button button-primary" href="#">Standard Button</a>
<!-- Large Button extended from Standard Button-->
<a class="button button-primary button-large" href="#">Large Button</a>

As you can see above, layout-related rules live in the .button class while skin-related rules live in .button-primary. This makes it easier to extend .button: if one decides to add a button with a different colour, there will be a separate class with a different colour scheme, keeping base rules such as padding, margin and alignment unchanged. If one decides to add a larger or smaller button, existing rules such as font-size and padding can be overridden while the rest are inherited.

Such an approach makes sure that buttons used throughout the application stay consistent with the design and remain easy to extend, which prevents regression.

Modularity and Compartmentalization

There cannot be a worse nightmare than hunting for a selector’s rules scattered across a file with 20k lines of code. As the code base grows, it is essential to compartmentalize rules depending on which components they deal with. Jonathan Snook provides a really nice solution for modularity in his book Scalable and Modular Architecture for CSS (SMACSS)[3]. He proposes dividing code into five categories, viz. base, layout, modules, state and theme. But maintaining five different categories can also lead to confusion, especially in large teams, so I reduced them to four.


Base

A collection of base rules for elements such as body, headings, input, paragraphs etc. These rules won’t change much. The following example provides a broader perspective –

* {
  -webkit-box-sizing: border-box;
  -moz-box-sizing: border-box;
  box-sizing: border-box;
}

body {
  font-family: Helvetica, Arial, sans-serif;
  font-weight: 200;
  font-size: 16px;
  background-color: #FCFCFC;
  color: #58585b;

  margin: 0;
  padding: 0;
}

h1 { font-size: 2.25em; }
h2 { font-size: 1.5em; }
h3 { font-size: 1.125em; }
h4 { font-size: 0.875em; }

pre {
  font: 0.75em Monaco, monospace;
  line-height: 1.5;
  margin-bottom: 1.5em;
}

code {
  font: 0.75em Monaco, monospace;
}

p {
  margin: 4px 0 12px 0;
}

a, a:visited, a:link {
  color: rgb(0, 173, 255);
  text-decoration: none;
}

a:hover, a:active {
  color: rgb(0, 143, 255);
  text-decoration: none;
}


Layout

A collection of rules dealing with the layout and structure of sections. It can involve margins and dimensions. It should not involve anything related to colours, fonts or decoration; strictly layout only. In the case of a grid, the layout should look like the following example –

.grid {
  display: table;
  margin: 10px;
  padding: 5px;
}

.grid-column {
  float: left;
  padding: 5px;
}

.grid-column-4 {
  width: 200px;
}

.grid-column-5 {
  width: 175px;
}


Skin

A collection of rules which decide the colour, fonts, shadows etc. of an element; it deals specifically with how a module will look. The grid example above can be skinned like this –

.grid {
  overflow: auto;
}

.grid-white {
  -webkit-box-shadow: 0 -1px 0 #FF9999;
  -moz-box-shadow: 0 -1px 0 #FF9999;
  box-shadow: 0 -1px 0 #FF9999;

  background-color: #F1394C;
  border-bottom: 1px solid #2A0E00;
  color: #FFCCCC;
}


State

These rules are usually toggled via JavaScript. They can include rules related to state, visibility, animation etc., e.g. –

.hide {
  display: none;
}

.animate {
  -webkit-transition: 0.3s all ease-out;
  -moz-transition: 0.3s all ease-out;
  -o-transition: 0.3s all ease-out;
  transition: 0.3s all ease-out;
}

.fadein {
  visibility: visible;
  opacity: 1;
}

.fadeout {
  visibility: hidden;
  opacity: 0;
}

Minimised Specificity

Before moving to specificity, it is important to understand how browsers work[4]. Understanding how the browser constructs and paints the DOM can help us create a better structure for the code base, and it is also helpful for optimization.


Rendering workflow of a webkit browser (Courtesy-HTML5 Rocks)

As shown in the diagram, the HTML and CSS parsers take care of parsing and validating the corresponding HTML and CSS. While each node of the render tree is being constructed, the rendering engine (in this case WebKit) goes through the entire parsed stylesheet to style that node, i.e. if there are N nodes then the rendering engine goes through the whole stylesheet N times to apply the proper rules to each node. Of course, there are internal optimizations which vary from browser to browser, but in layman’s terms this is how the render tree is constructed. Because browsers go through the style rules so often, they match selectors from right to left[5].

Right to Left Selector Matching Flowchart


As shown above, the rendering engine goes through all the rules for each node construction. It first checks whether the key selector (i.e. the rightmost selector) matches the current node. If it matches, the engine moves to the adjacent selector and checks whether that matches an ancestor of the current node. If any condition returns false, the rendering engine moves to the next rule; otherwise the current rule is applied to the node. Hence, the more adjacent selectors there are, in other words the higher the level of specificity, the more time it takes to complete the process.
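The flowchart can be mimicked with a toy matcher. This is an illustrative sketch of right-to-left matching for a descendant selector, not a real engine:

```javascript
// Match a node against a descendant selector written as an array of
// simple selectors, e.g. ['.container', '.link'] for ".container .link".
// Nodes here are plain objects: { cls: '...', parent: <node or null> }.
function matchesRightToLeft(node, selectorParts) {
  var parts = selectorParts.slice();
  var key = parts.pop();                  // key selector = rightmost
  if (node.cls !== key) return false;     // key must match the node itself
  var ancestor = node.parent;
  while (parts.length) {                  // remaining parts must match ancestors
    if (!ancestor) return false;          // ran out of ancestors: no match
    if (ancestor.cls === parts[parts.length - 1]) parts.pop();
    ancestor = ancestor.parent;
  }
  return true;
}
```

Note how a mismatch on the key selector rejects the rule immediately, before any ancestor walking; that early exit is exactly why right-to-left matching is cheap.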

Once the render tree is constructed, it is sent for painting and eventually dumped on the screen. This is not a one-time, on-load process; it happens all the time. Whenever you add or remove elements in the DOM tree dynamically, or add or remove a set of rules, painting and tree reconstruction happen again. In single-page applications, the frequency of painting and reconstruction increases even more, as components like modals, alerts and containers are generated dynamically.

Having just one level of specificity is the desired approach, but up to three levels is acceptable, because keeping everything at one level can sometimes become cumbersome.

body .container div span a {
  /* BAD approach */
}

.container-link {
  /* Correct approach */
}

In the case of IDs, it is important to have only one level of specificity. IDs are supposed to be unique in the document, hence there is no need to mention parent selectors.

.container #special-element {
  /* BAD approach */
}

#special-element {
  /* Correct approach */
}

Use Classes

There are four types of selectors

  1. ID Selector
  2. Class Selector
  3. Element Type Selector
  4. Universal Selector

According to the W3C spec[6], an ID selector should be unique in the document. Being unique, the ID selector is the fastest selector. But as far as reusability is concerned, IDs prevent it. Poor naming conventions can also lead to duplicate IDs, which browsers silently tolerate.

The selector which comes closest to object-oriented CSS is the class selector. If proper modularity and naming conventions are applied, every class can be reused. The class selector is second in speed only to the ID, and it is the most effective way to organize your code.

Considering how browsers work, relying on element type selectors is not a good idea. Every time a node matches the element type, the rendering engine stops at that rule and starts iterating over the adjacent selectors. Suppose there are 100 divs present in the DOM and a style rule has div as its key selector: every time a div node is constructed, the rendering engine will halt at that rule, which is a waste of time.

The last thing you want in a document is the universal selector. Unless there is a strong reason, it should not be added: it is the most expensive selector. If there are N nodes in the document, the rendering engine will go through every rule containing the universal selector N times.

Considering all these scenarios, it might look good to use IDs, since the ID is the fastest selector. But IDs prevent us from implementing object-oriented paradigms. ID selectors should be reserved for JavaScript; using class selectors in CSS helps maintain and scale the stylesheet, as discussed above.

Naming Conventions

Naming conventions play an important part in implementing reusability and minimized specificity. If meaningful and unique names are given to selectors, both can be achieved. It must have happened numerous times that duplicate names are assigned to a class, or worse, to IDs, e.g. #title, #container etc. This problem can be solved by building the namespace into the name itself. Encoding the hierarchy of nodes in the name not only makes it unique but also meaningful. Consider the example of a gallery below –

.gallery { /* Rules for gallery */ }
.gallery-montage { /* Rules for container */ }
.gallery-montage-title { /* Rules for title */ }
.gallery-montage-thumbnail { /* Rules for thumbnail */ }

It is not necessary to encode a really deep hierarchy in the name itself; do it according to your convenience, but at most three levels seems acceptable. Namespaced names give you an idea about the node at a glance and help achieve unique naming. It’s better to have long names than long specificity chains. This practice is beautifully implemented in jQuery UI’s CSS; I recommend going through jQuery UI to see it in action.

Compress, Sprite and Font Icons

These are the most commonly recommended optimization techniques. There are several popular articles explaining the various ways and tools for compression[7], so I won’t go into much detail.


Compression

To spend less time loading content from the network, gzip compression is recommended: all content that is served should be gzipped on the server. It is an effective and common practice for reducing loading time. Image compression can also reduce the size of an image substantially; one popular tool I use for image compression is ImageOptim. There are also ways to compress JavaScript and CSS files: tools like Google Closure Compiler and YUI Compressor can reduce the size of your project files by eliminating dead code and removing whitespace and comments. An open source project curated by Paul Irish and Addy Osmani is capable of doing all these compressions with one command; it is specially designed for managing web applications, and compression is one of its main features.

Image Sprites

Instead of loading hundreds of images, load one image which consists of a sprite, or mosaic, of the required hundred images, and use the background-position property to render each of them on screen. When I first came across this technique, my jaw dropped. Glue is one tool for generating image sprites and the corresponding CSS. This technique can reduce bandwidth substantially.
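The background-position arithmetic is simple; here is a hedged sketch for a horizontal strip of equally sized icons (the helper name is my own):

```javascript
// For a horizontal sprite strip, icon n starts at n * iconWidth pixels,
// so we shift the background left by that amount.
function spriteOffset(n, iconWidth) {
  return '-' + (n * iconWidth) + 'px 0';
}
```

Setting el.style.backgroundPosition = spriteOffset(3, 32) would show the fourth 32px icon from the strip.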

Font Icons

Font icons can be an alternative to image sprites. Common icon sets, such as social media or toolbar icons, can be served as a font. The advantage of using fonts is that they not only render well on retina displays, but you can also alter their dimensions and colour using CSS properties. Font icons are also useful if your webpage follows flat design. The technique was made popular by GitHub’s Octicons[8]. Popular ways to get font icons are Font Awesome and Fontello, or you can create your own using tools like Adobe Illustrator.

As far as compression and optimization techniques are concerned, Google Chrome’s Developer Tools come to the rescue. By running an ‘Audit’ of your website via DevTools, you get a list of optimizations you are supposed to perform. Just open DevTools, click the Audit tab and hit Run.

DevTools in action


Enable CSSLint

There has been some speculation about whether to use CSSLint or not[9]. It can be annoying; even CSSLint’s webpage says

CSSLint will hurt your feelings*

But I recommend you use it. It helps you fix your rules. Before CSSLint I was not aware that, in the case of display: table-cell, there is no need to add a margin property; I used to ignore it because the code just worked. CSSLint can help you figure out numerous such problems. However, CSSLint throws warnings under certain conditions which may not sound right to you. These conditions include the use of IDs, styles for heading tags, ignorance of the box model etc. But you can always ignore these warnings (after all, warnings are there to be ignored ;)).

The best way to use CSSLint is to install a CSSLint plugin/extension for your text editor. I use Sublime Text, which has a great package called ‘SublimeLinter’ that comes with linters for almost all popular languages. In the case of Sublime Text, you will need to install CSSLint via npm. Once you have CSSLint, you can disable certain rules in the SublimeLinter preferences. Here are mine –

        "adjoining-classes": "warning",
        "box-model": false,
        "box-sizing": false,
        "compatible-vendor-prefixes": "warning",
        "display-property-grouping": true,
        "duplicate-background-images": "warning",
        "duplicate-properties": true,
        "empty-rules": true,
        "errors": true,
        "fallback-colors": "warning",
        "floats": "warning",
        "font-faces": "warning",
        "font-sizes": "warning",
        "gradients": false,
        "ids": false,
        "import": "warning",
        "important": "warning",
        "known-properties": true,
        "outline-none": "warning",
        "overqualified-elements": "warning",
        "qualified-headings": "warning",
        "regex-selectors": "warning",
        "rules-count": "warning",
        "shorthand": "warning",
        "star-property-hack": "warning",
        "text-indent": "warning",
        "underscore-property-hack": "warning",
        "unique-headings": "warning",
        "universal-selector": "warning",
        "vendor-prefix": true,
        "zero-units": "warning"

I hope this guide was helpful for you. If you have any questions or if you find any bugs in this article, feel free to add a comment below.


  1. CSS Architecture by Philip Walton
  2. CSS source code/architecture of jQuery UI, Twitter Bootstrap, WordPress
  3. Scalable and Modular Architecture for CSS by Jonathan Snook
  4. How Browsers Work by Tali Garsiel & Paul Irish
  5. Why do browsers match CSS selectors from right to left? – Stackoverflow
  6. Global structure of an HTML document by W3.Org
  7. Best Practices for Speeding up your Website by Yahoo Developer Network
  8. Making of Octicons by Github
  9. CSSLint on Hacker News

Private Function in Javascript & Coffeescript

I have not worked in Java professionally, but I used to code in it during my graduation. One thing I always liked about Java is encapsulation: it provides really good encapsulation in terms of readability. I don’t know if the following code will work, but this is the prototype of encapsulation in Java


I miss that feature in JavaScript. You can create a private function in JavaScript, but it is not as intuitive or obvious to read. Before moving ahead, you should know how object-oriented programming is done in JavaScript, or at least how prototype is used. This is how I used to do encapsulation
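The screenshot that used to illustrate this is missing, so here is a sketch of the usual prototype-based attempt (the underscore prefix is a convention I’m assuming; nothing is actually hidden):

```javascript
// "Private" by naming convention only: _doPrivateThings is still
// reachable from outside, which is the shortcoming discussed below.
function Person(name) {
  this.name = name;
}
Person.prototype._doPrivateThings = function () {
  return 'working with ' + this.name;
};
Person.prototype.doPublicThings = function () {
  return this._doPrivateThings();
};
```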


But it does not give you the notion of encapsulation which you get while coding in Java. So CoffeeScript comes to the rescue. Take a look at the gist below

If you look at the layout of the code, it does look like the pure encapsulation supported by object-oriented languages like Java or C++. Look at the definition of doPrivateThings: it uses ‘=’ instead of ‘:’, which makes it a private function. However, the context of that function will be window. By invoking it with the instance as its context, you make sure that other members, like name, are accessible to doPrivateThings. Isn’t that cool? Now hang on a moment: the JavaScript which the CoffeeScript compiler generates is even cooler. It is one of the best JavaScript hacks I have come across.

First, person is defined as a variable, which ensures that person will be accessible anywhere in that scope. On line 2 it is assigned a self-executing anonymous function (something you don’t see in traditional languages). Inside it, a local person is created whose prototype is defined exactly as in the snapshot shown earlier, so in that scope person acts as a class. The same person object is then handed to the outer person via return, which makes the outer person a closure. Since person is now a closure, you can still access doPrivateThings by maintaining the context.
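The gist with the compiled output is missing here, so below is my reconstruction of the pattern just described: person assigned from a self-executing function whose closure keeps doPrivateThings truly private, with .call preserving the instance context (names follow the text; the details are my assumption):

```javascript
var person;

person = (function () {
  // private: lives only in this closure, never attached to the prototype
  var doPrivateThings = function () {
    return 'working with ' + this.name;
  };

  function person(name) {
    this.name = name;
  }

  person.prototype.doPublicThings = function () {
    // .call(this) hands the instance to the private function,
    // so members like `name` stay accessible inside it
    return doPrivateThings.call(this);
  };

  return person;

})();
```

Unlike the prototype-only approach, doPrivateThings here is genuinely unreachable from outside the closure.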

It is not great in terms of readability, but it is intricate and brain-teasing. I don’t know how many of you actually understood the concept here, as it is really hard to grasp in one read.

I found a good Stack Overflow thread about it, in which another method of encapsulation is described.


CSS Optimization, digging into CSS Engine

Yesterday I asked a question on Stack Overflow: “difference between using MULTIPLE ‘id’ and ‘class’ attributes in HTML and CSS”. Though the answer was quite obvious, i.e. ‘the standard says so’, I actually asked the question to get some good weekend reads, and Stack Overflow did not disappoint: BoltClock gave me some really good resources on CSS parsing.

With the rise of HTML5, more and more startups are looking at the web as a platform and building web applications. This leads to the construction of complex DOM trees, and in such cases CSS optimization becomes important. This blog post focuses on CSS optimization.

Before optimizing your CSS, it’s important to understand how the browser, and in particular the CSS engine, works. When the browser executes an HTML file, it first parses the HTML, then the CSS, making sure each is error-free, followed by DOM tree construction, where whenever a new node is created the corresponding style is applied from the CSS[1]. So every time a new node is created, the CSS engine goes through the whole stylesheet to find matching selectors for that element. For this reason, rules are traversed from right to left instead of left to right[2]. I went too fast there; let me explain with an example:


Consider above diagram as simple webpage and the styles for above diagram is given below –

#maincontent #container1 #subcontainer-right {
    /* Bunch of rules */
}

#maincontent #container1 #subcontainer-left {
    /* Bunch of rules */
}

#maincontent #container1 #subcontainer-left div {
    /* More rules */
}

#maincontent #container1 #subcontainer-right div {
    /* Even more rules */
}

When the browser starts constructing the DOM tree, it reads an HTML tag, creates a node, then looks for the CSS rules for that element. It does this for each tag throughout the HTML, i.e. it goes through the whole CSS for every HTML tag. Of course browsers implement algorithms to avoid redundant work. Now suppose the browser is constructing #subcontainer-left and wants to apply CSS rules to it. What happens if rules are traversed from left to right?

As I said, all CSS rules are traversed for each node construction. Reading left to right, the CSS engine first reaches #maincontent and checks: is #maincontent the same as #subcontainer-left? No. Is it an ancestor of #subcontainer-left? Yes, so it moves to #container1 and checks whether that is an ancestor too. Yes, so it finally reaches #subcontainer-right, realizes that #subcontainer-right ≠ #subcontainer-left, stops, and moves to the next line to repeat the same procedure, again and again, until it has applied all the rules matching the current node.

Now let’s see what happens when it traverses from right to left –

It checks: is #subcontainer-right the same as #subcontainer-left? No, so it moves on to the next rule immediately. For the rule whose key does match, it moves to the next selector, #container1: is #container1 an ancestor of #subcontainer-left? Yes. Next selector: is #maincontent an ancestor of #subcontainer-left? Yes. Everything confirmed, so the rules are applied.

As you can see, right-to-left matching saves a lot of time compared to left-to-right, since the engine traverses a whole rule only when its key selector matches the current node.

So if we want to optimize, we have to take the above procedure into consideration. Suppose I want to style all the cells inside ‘#subcontainer-left’; what are the possible ways of writing rules for those cells?

1. #maincontent #container1 #subcontainer-left * : ‘*’ is known as the universal selector and matches any possible element. Using ‘*’ anywhere is as close as it gets to signing your own death warrant with your own hand (I have been watching too many crime investigation shows these days 😉). Since rule traversal is done from right to left, the CSS engine will halt at the line containing ‘*’ as the key (the extreme-right selector a rule is defined for is known as the key[3]) for every single node, then move leftwards and walk the node’s ancestors to check whether they match the remaining selectors. This will happen for each and every node construction.

2. #maincontent #container1 #subcontainer-left div : This approach is also wrong. Whenever any div element is constructed, the engine halts here and performs the binary search over the ancestors to confirm they match. It still makes everything expensive, because the rule is evaluated for no reason every time a div is encountered.

3. #maincontent #container1 #subcontainer-left .cell : This approach is much better compared to the earlier ones, but it is still expensive. The reason is again the binary search for ancestors: even though .cell is unique, the engine will still walk the ancestors for confirmation. This is the kind of approach you can use, but trust me, you can optimise even further.

The best way is to make everything unique. From classes to ids, everything should be unique, with no parent selectors. For example, a plain .cell {  } rule makes everything much faster because there is no binary search for ancestors. The trade-off is readability: making everything unique requires proper naming conventions, otherwise the code becomes hard to maintain. So make your own naming conventions according to your convenience. In the case above I would prefer something like ‘.maincontent-container-cell’. You can see this name is built simply by following the hierarchy of the elements, which definitely makes it easy to understand where exactly a node belongs. I saw this approach in one of the jQuery UI plugins. If a name becomes too big you can obfuscate it later on, but having long names is always better than performing a binary search for ancestors.
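As a sketch of the flat-selector approach (class names and property values here are illustrative):

```css
/* Expensive: on every match the engine must also verify the whole
   ancestor chain (#subcontainer-left → #container1 → #maincontent). */
#maincontent #container1 #subcontainer-left .cell { padding: 4px; }

/* Cheap: one unique class, no ancestor checks at all.
   The hierarchy survives in the name itself. */
.maincontent-container-cell { padding: 4px; }
```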

I became even more confident about this theory when I looked at Gmail's source code. I am adding a snapshot of the developer-tools window.


You will see that there are no parent selectors. All the class names are obfuscated, but having no parent selectors is what keeps Gmail fast even with such a complicated DOM structure. You can see how deep the whole DOM goes, and on the left each element has a unique name and no parent selector. This post may sound like premature optimisation, but that is only because you are not aware of how intricate a DOM tree can become.

If you disagree with me or find any flaws in the post, feel free to comment. I will be happy to work on it.

References :

[1] How Browsers Work: Behind the scene of Modern Web browsers – HTML5Rocks


[3] Optimize browser rendering – Google Code



How to move ‘/home’ to a new partition without re-installing an existing Linux distribution

Having an entirely separate partition for ‘/home’ is always beneficial, as home is the directory where all your media and documents live. If ‘/home’ sits inside ‘/’, there is a possibility that switching to a new Linux distro will cause loss of data (although an Ubuntu-to-Ubuntu switch usually does not). And if you add a new HDD and want all your data shifted to the new volume, it becomes cumbersome to mount and manage the data every time. So here are the steps you need to follow to move ‘/home’ to a new partition.

Copy your ‘/home’ to new Partition:

Open your Terminal and Run

$ df -h | grep ^/dev

This will give you a list of all the mounted partitions. Copy the path of the desired partition (it will look like /media/LABEL_NAME), then copy your home folder to it.

$ sudo rsync -axS --exclude='/*/.gvfs' /home/. /media/LABEL_NAME/.

The reason we use ‘rsync’ instead of ‘cp’ to copy the contents is that ‘rsync’ also preserves file permissions and ownership.

Find out the uuid of the Partition:

UUID stands for Universally Unique Identifier; every partition has its own UUID.

$ sudo blkid


This will give you output similar to the one shown in this snapshot. Copy the UUID of the partition you want ‘/home’ to be on, and keep in mind the type of the partition (i.e. ext3, ext4, ntfs).

Prepare the switch :

Before moving ahead, make sure to take a backup copy of ‘fstab’. Run:

$ sudo cp /etc/fstab /etc/fstab_bckup

Now open ‘/etc/fstab’ with your favorite text editor

If you are comfortable with vim, then do

$ sudo vim /etc/fstab

or do

$ gksu gedit /etc/fstab

Add following line in fstab-

UUID=????????   /home    TYPE          nodev,nosuid       0       2

where ‘????????’ is the UUID of your partition and TYPE is the filesystem type (ext3, ext4, etc.). Save it and close the file.
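For example, a filled-in line might look like this (the UUID below is purely illustrative; use the one blkid printed for your partition):

```
UUID=2f64b5c4-1a2b-4c3d-9e8f-0a1b2c3d4e5f   /home   ext4   nodev,nosuid   0   2
```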

Now you are almost ready to switch, but before rebooting you need to move the current ‘/home’ aside as a backup and create an empty mount point in its place.

Follow this sequence of commands:

$ cd /

$ sudo mkdir /home_bckup

$ sudo mv /home /home_bckup

$ cd /

$ sudo mkdir -p /home

Now you have backups of both ‘/home’ and ‘fstab’, so it is time to reboot. If you have followed the instructions properly, then the next time you log in everything will be the way it was before. Make sure everything is working properly. If you use applications like Dropbox you might have to reconfigure them (Dropbox itself will prompt you) so that Dropbox can re-index your directories.
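Once you are back in, a quick sanity check (not part of the original steps) is to ask df which filesystem ‘/home’ is mounted from:

```shell
# After a successful switch this should report the new partition,
# not the root filesystem.
df -h /home
```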


Hacking Tikona WiFi to get WiFi access from Ubuntu

After a dreadful experience with Tikona customer care, this morning I registered the 4th complaint of the week, knowing that there would be no answer from them, nor would their ‘quack mechanics’ be able to solve it. So I decided to hack it. After going through various FAQs on the Tikona portal I came to know its configuration, and I finally managed to connect too.

The distro I'm using is Ubuntu 11.04.

Here are the steps:

Step 1:  

First go to ‘Network Manager’, then click on ‘Connect to Hidden Wireless Network’.


Step 2: 

You will then get a window similar to this one. Give the ‘Network Name’ whatever value you want, and select ‘Wireless Security’ as ‘WPA and WPA2 Enterprise’, since Tikona uses WPA encryption.




Step 3: 

It will then open a new window that looks similar to the one below. Fill in the details as shown in the snapshot: select ‘Authentication’ as ‘Tunneled TLS’ and ‘Inner Authentication’ as ‘MSCHAPv2’. Then enter the ‘User Name’ and ‘Password’ provided by Tikona Digital Networks. Once the details are filled in as shown in the snapshot, hit ‘Connect’.



Step 4: 

After a couple of seconds this notification will appear. Then go to your browser, and the normal login page will appear — the same one we see while connecting via Ethernet.



That's it! Happy surfing!