Mapping Technology Trends to Enterprise Product Innovation

Scope: Focuses on enterprise platform software: Big Data, Cloud platforms, software-defined infrastructure, microservices, DevOps.
Why: We are living in an era of continuous change and a low barrier to entry. Net result: a lot of noise!
What: Sharing my expertise, gained over nearly two decades, in extracting the signal from the noise! More precisely, identifying shifts in ground realities before they become widely cited trends and pain points.
How: NOT by reading tea leaves! Instead, by synthesizing technical and business understanding of the domain at 500 ft., 5,000 ft., and 50,000 ft.

(Disclaimer: Personal views not representing my employer)

Friday, August 16, 2013

Master-based Scale-out Storage Architecture


As a recap, in a Master-based taxonomy the cluster has a special node called the Master. The other nodes are typically referred to as Slaves, Chunk servers (as in HDFS/GFS), or simply nodes (the terminology we use in this blog). In this taxonomy, the Master maintains the global state and coordinates the activities related to cluster and namespace management. For normal IO operations, clients typically contact the Master first for metadata, and then perform IO directly on the nodes.
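To make the division of labor concrete, here is a minimal, in-memory sketch of the read path described above: the client asks the Master where the data lives, then reads directly from the nodes. All class and method names (Master, Node, client_read, etc.) are illustrative, not taken from HDFS/GFS or any other real system.

```python
class Master:
    """Holds the global namespace: file name -> list of (node, chunk_id)."""
    def __init__(self):
        self.namespace = {}                      # file -> [(node, chunk_id), ...]

    def lookup(self, filename):
        # Clients call this once per file to learn where the chunks live.
        return self.namespace.get(filename, [])


class Node:
    """Stores chunk data; serves IO directly to clients, bypassing the Master."""
    def __init__(self, name):
        self.name = name
        self.chunks = {}                         # chunk_id -> bytes

    def read_chunk(self, chunk_id):
        return self.chunks[chunk_id]


def client_read(master, filename):
    """Metadata from the Master, data straight from the nodes."""
    locations = master.lookup(filename)          # one metadata round trip
    return b"".join(node.read_chunk(chunk_id)    # IO flows node-to-client
                    for node, chunk_id in locations)


# Wiring it together:
master, node1 = Master(), Node("node-1")
node1.chunks["c0"] = b"hello"
master.namespace["/demo.txt"] = [(node1, "c0")]
assert client_read(master, "/demo.txt") == b"hello"
```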


In contrast to other taxonomies, the Master-based approach is attractive for the simplicity of its cluster coordination protocol. In other words, because the Master maintains the global state, consensus management and coordination are radically simplified. Further, it allows rich policies for data placement and resource allocation, since the location mapping is persisted in a directory rather than derived from a mathematical key-based routing protocol (such as consistent hashing, Chord, or CAN), as in masterless clusters.
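To illustrate the difference, the following sketch contrasts the two approaches: a directory-based placement where the Master applies an arbitrary policy and records the result, versus a consistent-hash ring where the location is computed from the key. The class names and the trivial policy are hypothetical; the point is only that the directory gives the Master a degree of placement freedom that key-based routing does not.

```python
import hashlib
from bisect import bisect_right

class DirectoryPlacement:
    """Master-based: placement is a policy decision, recorded in a directory."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.directory = {}                      # object_id -> node

    def place(self, object_id, policy):
        node = policy(self.nodes)                # e.g. least-loaded, rack-aware, ...
        self.directory[object_id] = node         # persisted by the Master
        return node

    def locate(self, object_id):
        return self.directory[object_id]


class ConsistentHashRing:
    """Masterless: the location is computed from the key, so there is no
    directory to consult, but also no room for rich placement policies."""
    def __init__(self, nodes):
        self.ring = sorted((self._hash(n), n) for n in nodes)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(str(key).encode()).hexdigest(), 16)

    def locate(self, object_id):
        keys = [h for h, _ in self.ring]
        idx = bisect_right(keys, self._hash(object_id)) % len(self.ring)
        return self.ring[idx][1]


nodes = ["node-1", "node-2", "node-3"]
directory = DirectoryPlacement(nodes)
directory.place("obj-42", policy=lambda ns: ns[0])   # any policy the Master likes
ring = ConsistentHashRing(nodes)
ring.locate("obj-42")                                # fixed by the hash function
```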


The obvious downside of the Master-based approach is the SPOF (Single Point of Failure) vulnerability: if the Master dies or is network partitioned, there is potential downtime, and even loss of recent transactions. The common design pattern to mitigate the SPOF is to run a backup Master as a Replicated State Machine. Typically, the backup Master is pre-selected, and the system does not dynamically elect a replacement Master. As such, a network partition that leaves the Master in the minority partition will lead to data unavailability. To summarize, w.r.t. the CAP theorem, the basic Master-based design makes these systems more suitable for CP than AP.
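The backup-Master pattern can be sketched as follows: every metadata mutation is applied to the pre-selected backup before the primary acknowledges it, so the backup can take over with identical state (the CP-leaning behavior described above). The class name and the direct-method-call "replication" are simplifications of my own, not a real protocol.

```python
class ReplicatedMaster:
    """A primary Master with an optional pre-selected backup (no leader election)."""
    def __init__(self, backup=None):
        self.state = {}                  # the global metadata state
        self.log = []                    # ordered log of mutations
        self.backup = backup             # pre-selected standby, or None

    def apply(self, op, key, value=None):
        if self.backup is not None:
            # Synchronous replication: do not ack the client until the
            # backup has applied the same log entry.
            self.backup.apply(op, key, value)
        self.log.append((op, key, value))
        if op == "put":
            self.state[key] = value
        elif op == "delete":
            self.state.pop(key, None)
        return "ack"


# Failover: the backup already holds identical state, so it can take over
# as the new primary without replaying anything.
backup = ReplicatedMaster()
primary = ReplicatedMaster(backup=backup)
primary.apply("put", "/file-a", ["node-1", "node-3"])
assert backup.state == primary.state
```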

Another key challenge in Master-based systems is the SPOS (Single Point of Saturation). If all requests must be redirected through the Master, it becomes the performance bottleneck, even though the other servers in the cluster may be underutilized. One of the most common design patterns to mitigate the SPOS is to distribute the global metadata state between the Master and the nodes, such that the Master only maintains the mapping of objects/files to the corresponding nodes, which in turn track the physical locations on their disks. This allows clients to cache the mapping and minimizes traffic to the Master for normal IO operations.
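Building on the hypothetical Master and Node classes from the first sketch, the snippet below shows the client-side half of this pattern: the file-to-node mapping is fetched from the Master only on a cache miss (or when a cached entry turns out to be stale), so steady-state IO never touches the Master.

```python
class CachingClient:
    """Caches the Master's file -> (node, chunk_id) mapping on the client."""
    def __init__(self, master):
        self.master = master
        self.cache = {}                          # filename -> [(node, chunk_id)]

    def read(self, filename, _retry=True):
        locations = self.cache.get(filename)
        if locations is None:                    # cache miss: one trip to the Master
            locations = self.master.lookup(filename)
            self.cache[filename] = locations
        try:
            # Normal IO path: talk to the nodes directly, Master not involved.
            return b"".join(node.read_chunk(chunk_id)
                            for node, chunk_id in locations)
        except KeyError:
            if not _retry:                       # already refreshed once; give up
                raise
            del self.cache[filename]             # stale entry (e.g. chunk moved)
            return self.read(filename, _retry=False)
```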

We will continue the discussion of Master-based systems in the next blog, investigating the topics of space management, data replication, locking, bootstrapping, and persistence of metadata.


