ZAB Algorithm

ZAB (ZooKeeper Atomic Broadcast) is an atomic broadcast protocol: it imposes a total order on concurrent requests so that every server applies them in the same sequential order.

It consists of two modes:

  • Recovery: Entered when the service starts or after a leader failure; ends when a leader emerges and a quorum of servers have synchronized their state with the leader
  • Broadcast: The leader is the server that executes a broadcast by initiating the broadcast protocol; once a leader has synchronized with a quorum of followers, it begins to broadcast messages (see the sketch after this list)
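
A minimal, single-process sketch of the broadcast phase, assuming a leader is already elected and its followers are synchronized (the class and method names below are illustrative, not ZooKeeper's actual API). The leader stamps each proposal with a zxid of (epoch, counter), collects acknowledgements, and commits once a quorum has acknowledged:

```python
class Follower:
    def __init__(self, fid):
        self.fid = fid
        self.log = []        # accepted proposals, in zxid order
        self.committed = []  # zxids the leader has committed

    def ack(self, zxid, op):
        """Persist the proposal, then acknowledge it to the leader."""
        self.log.append((zxid, op))
        return True

    def commit(self, zxid):
        self.committed.append(zxid)


class Leader:
    def __init__(self, followers, epoch=1):
        self.followers = followers
        self.epoch = epoch
        self.counter = 0

    def broadcast(self, op):
        # zxid = (epoch, counter) totally orders all proposals.
        self.counter += 1
        zxid = (self.epoch, self.counter)

        # Phase 1: propose to every follower and count acknowledgements.
        acks = 1  # the leader implicitly acknowledges its own proposal
        for f in self.followers:
            if f.ack(zxid, op):
                acks += 1

        # Phase 2: commit once a majority of the ensemble has acknowledged.
        quorum = (len(self.followers) + 1) // 2 + 1
        if acks >= quorum:
            for f in self.followers:
                f.commit(zxid)
            return zxid
        raise RuntimeError("no quorum: re-enter recovery mode")


followers = [Follower(i) for i in range(4)]  # a 5-server ensemble
leader = Leader(followers)
print(leader.broadcast("create /node"))      # -> (1, 1)
```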

Important related links:

BASE and ACID

BASE stands for “Basically Available, Soft-State Services with Eventual Consistency”. It describes the step-by-step transformation of a transactional application into one that is far more concurrent and less rigid.

  • Basically Available: Fast response even if some replicas are slow or crashed
    • The BASE paper notes that in data centers, partitioning faults are very rare and are mapped to crash failures by forcing the isolated machines to reboot
  • Soft-State Service: No durable memory
    • Can’t store any permanent data
    • Restarts in a “clean” state after a crash
    • To remember data, either replicate it in memory in enough copies to never lose all in any crash, or pass it to some other service that keeps “hard state”
  • Eventual Consistency: OK to send “optimistic” answers to the external client (see the sketch after this list)
    • Could use cached data (without checking for staleness)
    • Could guess at what the outcome of an update will be
    • Might skip locks, hoping that no conflicts will happen
    • Later, if needed, correct any inconsistencies in an offline cleanup activity
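
A minimal sketch of the eventual-consistency idea above, assuming a cache that may serve stale reads and an offline reconciliation pass (the store and cache here are illustrative stand-ins, not a real service):

```python
import time

store = {"balance": 100}                 # authoritative "hard state"
cache = {"balance": (100, time.time())}  # cached value + time it was cached

def optimistic_read(key):
    """Answer fast from the cache, without checking for staleness."""
    value, _cached_at = cache[key]
    return value

def write(key, value):
    store[key] = value                   # durable write; cache NOT invalidated

def reconcile():
    """Offline cleanup: bring the cache back in line with the store."""
    for key, value in store.items():
        cache[key] = (value, time.time())

write("balance", 150)
print(optimistic_read("balance"))  # 100 -- stale, but fast ("basically available")
reconcile()
print(optimistic_read("balance"))  # 150 -- eventually consistent
```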

ACID is a model for the correct behavior of databases, with the following properties (a small runnable example follows the list):

  • Atomicity: Even if a transaction consists of multiple operations, either all of them run to completion (commit) or the transaction rolls back so that it leaves no effect (abort)
  • Consistency: A transaction that runs on a correct database leaves it in a correct (“consistent”) state.
  • Isolation: It looks as if each transaction ran all by itself. Basically says “we’ll hide any concurrency”: each transaction appears to run its operations sequentially, without interference from others
  • Durability: Once a transaction commits, updates can’t be lost or rolled back
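
A small runnable illustration of atomicity and durability, using Python's standard sqlite3 module (the accounts table and transfer helper are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transaction control
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")

def transfer(src, dst, amount):
    """Move money atomically: either both updates happen, or neither does."""
    try:
        conn.execute("BEGIN")
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        (balance,) = conn.execute("SELECT balance FROM accounts WHERE name = ?",
                                  (src,)).fetchone()
        if balance < 0:
            raise ValueError("insufficient funds")   # would break the consistency rule
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        conn.execute("COMMIT")    # durable: cannot be lost or rolled back afterwards
    except Exception:
        conn.execute("ROLLBACK")  # atomic: the partial debit leaves no effect
        raise

transfer("alice", "bob", 60)
print(conn.execute("SELECT * FROM accounts ORDER BY name").fetchall())
# [('alice', 40), ('bob', 60)]
```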

Related article: CAP theorem

MapReduce

MapReduce Sources:

  • MapReduce: Simplified Data Processing on Large Clusters
  • The Google File System
  • Pig Latin: A Not-So-Foreign Language for Data Processing
  • Hive – A Petabyte Scale Data Warehouse Using Hadoop
  • Hadoop tutorial: https://developer.yahoo.com/hadoop/tutorial/module4.html
  • Hadoop ecosystem (image): http://irajain.weebly.com/uploads/2/6/0/4/26040426/13408_orig.jpg
  • Pig or Hive: https://developer.yahoo.com/blogs/hadoop/pig-hive-yahoo-464.html

Difference between Hadoop MapReduce and Apache Spark
http://www.quora.com/What-is-the-difference-between-Apache-Spark-and-Apache-Hadoop-Map-Reduce
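
Since the sources above describe the MapReduce model (map, then shuffle/group by key, then reduce), here is a minimal in-process word-count sketch of those three phases; a real Hadoop job runs them distributed across a cluster:

```python
from collections import defaultdict

def map_fn(document):
    """Map: emit (word, 1) for every word in the document."""
    for word in document.split():
        yield word.lower(), 1

def reduce_fn(word, counts):
    """Reduce: sum all counts emitted for one word."""
    return word, sum(counts)

def mapreduce(documents):
    groups = defaultdict(list)           # the "shuffle": group values by key
    for doc in documents:
        for key, value in map_fn(doc):
            groups[key].append(value)
    return dict(reduce_fn(k, v) for k, v in groups.items())

print(mapreduce(["the quick brown fox", "the lazy dog"]))
# {'the': 2, 'quick': 1, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 1}
```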

Server Virtualization Comparison

Please visit the following links for more details:

CAP Theorem

In a distributed system you can satisfy at most 2 out of the following 3 guarantees (a small sketch of the trade-off follows the list):

  1. Consistency: all nodes see the same data at any time; reads return the latest value written by any client.
    • Even though multiple clients may be reading and writing the data concurrently, all clients see the same data at any given point in time, and a read by any client returns the latest value written by any client.
  2. Availability: the system allows operations all the time, and operations return quickly.
  3. Partition-tolerance: the system continues to work in spite of network partitions.
    • When the network is partitioned and parts of the system cannot reach each other, the system should still continue to work; the remaining functioning partitions still need to guarantee both consistency and availability.
    • Partitions can happen across data-centers when the Internet gets disconnected, e.g. through router outages, under-sea cables being cut, or DNS failures.
    • Partitions can also occur within a data-center, e.g. a rack switch outage.
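
A small sketch of the trade-off, assuming N replicas with majority quorums for both reads and writes (so the two quorums always overlap): when a partition leaves fewer than a quorum reachable, operations are refused, preserving consistency at the cost of availability. All names here are illustrative:

```python
class Replica:
    def __init__(self):
        self.value, self.version = None, 0
        self.reachable = True

N = 5
replicas = [Replica() for _ in range(N)]
QUORUM = N // 2 + 1   # majority: any two quorums share at least one replica

def write(value):
    up = [r for r in replicas if r.reachable]
    if len(up) < QUORUM:
        raise RuntimeError("unavailable: cannot reach a write quorum")
    version = max(r.version for r in up) + 1
    for r in up:
        r.value, r.version = value, version

def read():
    up = [r for r in replicas if r.reachable]
    if len(up) < QUORUM:
        raise RuntimeError("unavailable: cannot reach a read quorum")
    # The newest version in any read quorum is the latest committed write,
    # because the read quorum overlaps every past write quorum.
    return max(up, key=lambda r: r.version).value

write("x = 1")
print(read())                   # "x = 1"
for r in replicas[:3]:          # partition away 3 of the 5 replicas
    r.reachable = False
try:
    write("x = 2")
except RuntimeError as e:
    print(e)                    # consistency preserved by giving up availability
```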

References:

  1. The CAP Theorem was proposed by Eric Brewer (UC Berkeley) in 1998:
    http://www.infoq.com/articles/cap-twelve-years-later-how-the-rules-have-changed
  2. The CAP Theorem was subsequently proved by Gilbert and Lynch (NUS and MIT) in 2002: ACM SIGACT News, Volume 33, Issue 2 (2002), pp. 51-59
  3. Wikipedia article: https://en.wikipedia.org/wiki/CAP_theorem