Google garnered impressive word of mouth among its users for one reason: it worked. Not only did its PageRank algorithm produce delightfully relevant results, it did so with impressive speed, and the service never showed signs of buckling under the exponential growth it was experiencing.
Page and Brin had their Stanford-era frugality to thank for this robustness. Because the pair had to scrape together every machine they could find to support the early service, they were forced to optimize Google to run on off-the-shelf parts: cheap hard drives, cheap memory chips, and cheap CPUs. Instead of buying heavy mainframe artillery from the likes of IBM or Fujitsu, Brin and Page created a small army of foot soldiers: a massively parallel formation of cheap processing and storage. The beauty of the system was that it scaled: the more computers you threw at it, the more robust it became. And when a component broke down, no problem; you simply swapped it out. The system as a whole could never fail: there were simply too many individual parts, none of which depended entirely on the others.
Google's three principles for "scalability":
The key to Google’s competitive strategy is that they have the cheapest compute, network and storage (CNS) in the industry.
Cheap also means things break. And when you’ve got several million servers, lots of things break every day. Get over it. Google expects failure and builds recovery into the software layer that connects the cheap kit.
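The idea of building recovery into the software layer can be sketched like this (a minimal illustration with invented names; nothing here is Google's actual code). Data is replicated across several cheap machines, and a lookup simply moves on to the next replica when one is down:

```python
# Illustrative sketch only: recovery lives in software, not in the hardware.
DOWN = {"node-1"}  # simulate one broken cheap machine


class ReplicaFailure(Exception):
    """Raised when a single cheap machine cannot serve the request."""


def lookup_on(replica: str, key: str) -> str:
    """Simulated lookup against one machine; fails if the machine is down."""
    if replica in DOWN:
        raise ReplicaFailure(f"{replica} is down")
    return f"value-for-{key}"


def reliable_lookup(key: str, replicas: list[str]) -> str:
    """Try each replica in turn; the service survives individual failures."""
    for replica in replicas:
        try:
            return lookup_on(replica, key)
        except ReplicaFailure:
            continue  # move on to the next machine, as you would swap hardware
    raise RuntimeError("all replicas failed")


# node-1 is down, so the answer comes from node-2 instead.
print(reliable_lookup("index:example.com", ["node-1", "node-2", "node-3"]))
# → value-for-index:example.com
```

Failure of any one machine is invisible to the caller; only the loss of every replica surfaces as an error, which is the whole point of expecting failure and handling it in software.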
Architect for scale
Architecting for scale leverages cheap CNS to give Google the lowest-cost growth as well. Competitors such as Yahoo, which rely more on standard enterprise data center (EDC) products, can do the same things Google does, but at roughly 10x the capital expense and several times the operations expense.