7. Key-Value Databases: In Depth


About This Presentation

In this lecture we analyze key-value databases. First we introduce key-value characteristics, advantages and disadvantages.
Then we analyze the major key-value data stores, and finally we discuss Dynamo DB, focusing in particular on how it is implemented.


Slide Content

Key-Value Databases
In Depth
Dr. Fabio Fumarola

Outline
•Key-values introduction
•Major Key-Value Databases
•Dynamo DB: how it is implemented
–Background
–Partitioning: Consistent Hashing
–High Availability for writes: Vector Clocks
–Handling temporary failures: Sloppy Quorum
–Recovering from failures: Merkle Trees
–Membership and failure detection: Gossip Protocol
2

Key-Value Databases
•A key-value store is a simple hash table
where all accesses to the database are via
primary key.
•A client can either (a minimal sketch follows the list):
–Get the value for a key,
–Put a value for a key, or
–Delete a key from the data store.
3
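As a rough illustration (not the API of any particular product), the sketch below models this three-operation interface in Scala over an in-memory map; real stores add persistence, replication and distribution behind the same shape.

// A minimal key-value store sketch: every access goes through the primary key.
import scala.collection.concurrent.TrieMap

class KVStore[K, V] {
  private val data = TrieMap.empty[K, V]

  def get(key: K): Option[V] = data.get(key)                  // value for a key, if present
  def put(key: K, value: V): Unit = data.update(key, value)   // store a value under a key
  def delete(key: K): Unit = data.remove(key)                 // drop the key from the store
}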

Key-value store: characteristics
•Key-value data access enables high performance and
availability.
•Both keys and values can be complex compound
objects, and sometimes lists, maps or other data
structures.
•Consistency is applicable only to operations on a
single key (eventual consistency).
4

Key-Values: Cons
•No complex query filters
•All joins must be done in code
•No foreign key constraints
•No triggers
5

Key-Values: Pros
•Efficient queries (very predictable performance).
•Easy to distribute across a cluster.
•Service-orientation disallows foreign key constraints
and forces joins to be done in code anyway.
•Using a relational DB + cache forces you into key-value
storage anyway.
•No object-relational mismatch.
6

Popular Key-Value Stores
•Riak – Basho
•Redis – Data Structure server
•Memcached
•Berkeley DB – Oracle
•Aerospike – fast key-value for SSD disks
•LevelDB – Google key-value store
•DynamoDB – Amazon key-value store
•Voldemort – open-source replica of Amazon's Dynamo
7

Memcached
•Atomic operations set/get/delete.
•O(1) to set/get/delete.
•Consistent hashing.
•In memory caching, no persistence.
•LRU eviction policy.
•No iterators.
8

Aerospike
•Key-value database optimized for a hybrid (DRAM + flash)
approach.
•First published in the Proceedings of VLDB (Very Large
Data Bases) in 2011, as “Citrusleaf: A Real-Time NoSQL DB which
Preserves ACID”.
9

Redis
•Written in C with a BSD license.
•It is an advanced key-value store.
•Keys can contain strings, hashes, lists, sets, sorted sets,
bitmaps and HyperLogLogs.
•It works with an in-memory dataset.
•Data can be persisted either by dumping the dataset to disk
every once in a while, or by appending each command to a
log.
•Created by Salvatore Sanfilippo (Pivotal)
10

Riak
•Distributed database written in Erlang & C, with some JavaScript
•Operations
–GET /buckets/BUCKET/keys/KEY
–PUT|POST /buckets/BUCKET/keys/KEY
–DELETE /buckets/BUCKET/keys/KEY
•Integrated with Solr and MapReduce
•Data Types: basic, Sets and Maps
11
curl -XPUT 'http://localhost:8098/riak/food/favorite' \
-H 'Content-Type:text/plain' \
-d 'pizza'

LevelDB
LevelDB is a fast key-value storage library written at Google that
provides an ordered mapping from string keys to string values.
•Keys and values are arbitrary byte arrays.
•Data is stored sorted by key.
•The basic operations are Put(key, value), Get(key), Delete(key).
•Multiple changes can be made in one atomic batch.
Limitation
•There is no client-server support built in to the library.
12

DynamoDB
•Peer-to-peer key-value database.
•Service Level Agreement at the 99.9th percentile.
•Highly available, sacrificing consistency.
•Can handle online node additions and node failures.
•It supports object versioning and application-assisted
conflict resolution (eventually-consistent data
structures).
13

Dynamo
Amazon’s Highly Available Key-value Store
14

Amazon Dynamo DB
•We analyze the design and the implementation of
Dynamo.
•Amazon runs a world-wide e-commerce platform.
•It serves 10 million customers.
•At peak times it uses 10,000 servers located in many
data centers around the world.
•They have requirements of performance, reliability
and efficiency that need a fully scalable platform.
15

Motivation of Dynamo
•There are many Amazon services that only need
primary-key access to a data store
–To provide best-seller lists
–Shopping carts
–Customer preferences
–Session management
–Sales rank and product catalogs
•Using a relational database would lead to inefficiencies
and limit scale and availability.
16

Background
17

Scalability is application dependent
•Lesson 1: the reliability and scalability of a system
depend on how its application state is managed.
•Amazon uses a highly decentralized, loosely coupled
service-oriented architecture composed of hundreds
of services.
•These services need storage that is always available.
18

Shopping carts, always
•Customers should be able to view and add items to
their shopping carts even if:
–Disks are failing, or
–A data center is being destroyed by a tornado or a
kraken.
19

Failures Happen
•When you deal with an infrastructure composed of
millions of components, servers and network
components crash.
20
http://letitcrash.com/

High Availability by contract
•Service Level Agreement (SLA) is the guarantee that
an application can deliver its functionality in a
bounded time.
•An example of an SLA is to guarantee that the Acme API
provides a response within 300 ms for 99.9% of its
requests at a peak of 500 concurrent users (CCU).
•Normally an SLA is described using average, median and
expected variance.
21

Dynamo DB
It uses a synthesis of well-known techniques to achieve
scalability and availability.
1.Data is partitioned and replicated using consistent hashing
[Karger et al. 1997].
2.Consistency is facilitated by vector clocks and object versioning
[Lamport 1978].
3.Consistency among replicas is maintained by a decentralized
replica synchronization protocol (E-CRDT).
4.A gossip protocol is used for membership and failure detection.
22

System Interface
•Dynamo stores objects associated with a key through
two operations: get() and put()
–The get(key) locates the object replicas associated with
the key in the storage and returns a single object or a list
of objects with conflicting versions along with a context.
–The put(key, context, object) operation determines where
the replicas of the object should be placed based on the
associated key, and writes the replicas to disk.
–The context encodes system metadata about the object
23
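One way the interface just described might be typed (names and types here are my own sketch, not Dynamo's actual code); the context is treated as an opaque blob that the client passes back on put():

// Sketch of Dynamo's two-operation interface.
final case class Context(metadata: Map[String, String])          // opaque system metadata about the object
final case class Versioned(value: Array[Byte], context: Context)

trait DynamoLike {
  // Returns one object, or several conflicting versions, together with their context.
  def get(key: Array[Byte]): Seq[Versioned]
  // Chooses the replicas from the key and writes the value; the context comes from a prior get.
  def put(key: Array[Byte], context: Context, value: Array[Byte]): Unit
}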

Key and Value encoding
•Dynamo treats both the key and the object supplied
by the caller as an opaque array of bytes.
•It applies an MD5 hash to the key to generate a 128-
bit identifier, which is used to determine the storage
nodes that are responsible for serving the key.
24
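A small sketch of that step: hash the key bytes with MD5 and read the 16-byte digest as an unsigned 128-bit integer, i.e. a position on the ring (illustrative only, not Dynamo's code).

import java.security.MessageDigest

// Map a key to its 128-bit position on the hash ring.
def ringPosition(key: Array[Byte]): BigInt = {
  val digest = MessageDigest.getInstance("MD5").digest(key)   // 16 bytes = 128 bits
  BigInt(1, digest)                                            // interpret as an unsigned integer
}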

Dynamo Architectural Choice 1/2
We focus on the core of distributed systems techniques used
25
Problem | Technique | Advantage
Partitioning | Consistent hashing | Incremental scalability
High availability for writes | Vector clocks with reconciliation during reads | Version size is decoupled from update rates
Handling temporary failures | Sloppy quorum and hinted handoff | Provides high availability and durability guarantee when some of the replicas are not available

Dynamo Architectural Choice 2/2
We focus on the core of distributed systems techniques used
26
Problem | Technique | Advantage
Recovering from permanent failures | Anti-entropy using Merkle trees | Synchronizes divergent replicas in the background
Membership and failure detection | Gossip-based membership protocol and failure detection | Preserves symmetry and avoids having a centralized registry for storing membership and node liveness information

Partitioning: Consistent Hashing
•Dynamo must scale incrementally.
•This requires a mechanism to dynamically partition
the data over the set of nodes (i.e., storage hosts) in
the system.
•Dynamo’s partitioning scheme relies on consistent
hashing to distribute the load across multiple storage
hosts.
•The output range of a hash function is treated as a
fixed circular space or ring.
27

Partitioning: Consistent Hashing
•Each node in the system is assigned a random value
within this space which represents its “position” on
the ring.
•Each data item is assigned to a node by:
1.hashing the data item’s key to yield its position on the
ring,
2.and then walking the ring clockwise to find the first node
with a position larger than the item’s position.
28

Partitioning: Consistent Hashing
•Each node becomes
responsible for the region of
the ring between it and its
predecessor node on the ring.
•The principal advantage of
consistent hashing is that the
departure or arrival of a node
only affects its immediate
neighbors; other nodes
remain unaffected.
29
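A compact way to realize this scheme (a sketch under my own naming, reusing the MD5 hashing described earlier): nodes and keys are hashed onto the same space, the node positions are kept in a sorted map, and a key is served by the first node found walking clockwise from its position.

import java.security.MessageDigest
import scala.collection.immutable.TreeMap

class HashRing(nodes: Seq[String]) {
  private def hash(s: String): BigInt =
    BigInt(1, MessageDigest.getInstance("MD5").digest(s.getBytes("UTF-8")))

  // ring position -> node name
  private val ring: TreeMap[BigInt, String] =
    TreeMap(nodes.map(n => hash(n) -> n): _*)

  def nodeFor(key: String): String = {
    val clockwise = ring.iteratorFrom(hash(key))   // nodes at or after the key's position
    if (clockwise.hasNext) clockwise.next()._2
    else ring.head._2                              // wrap around past the top of the ring
  }
}

For example, new HashRing(Seq("A", "B", "C")).nodeFor("resource-2") returns whichever server's position comes first clockwise of the hashed key; removing a node only reassigns the keys that mapped to it.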

Consistent Hashing: Idea
•Consistent hashing is a technique that lets you
smoothly handle these problems:
1.Given a resource key and a list of servers, how do you
find a primary, secondary, tertiary (and on down the line)
server for the resource?
2.If you have different size servers, how do you assign each
of them an amount of work that corresponds to their
capacity?
30

Consistent Hashing: Idea
•Consistent hashing is a technique that lets you
smoothly handle these problems:
3.How do you smoothly add capacity to the system without
downtime?
4.Specifically, this means solving two problems:
•How do you avoid dumping 1/N of the total load on a new server
as soon as you turn it on?
•How do you avoid rehashing more existing keys than necessary?
31

Consistent Hashing: How To
•Imagine a 128-bit space.
•Visualize it as a ring, or a
clock face.
•Now imagine hashing
resources into points on
the circle
32

Consistent Hashing: How To
•They could be URLs, GUIDs,
integer IDs, or any arbitrary
sequence of bytes.
•Just run them through a good
hash function (e.g., MD5) and
shave off everything but 16
bytes.
•We have four key-values: 1, 2,
3, 4.
33

Consistent Hashing: How To
•Finally, imagine our servers.
–A,
–B, and
–C
•We put our servers in the same
ring.
•This solves the problem of
which server should serve
Resource 2.
34

Consistent Hashing: How To
•We start where Resource 2 is
and head clockwise on the
ring until we hit a server.
•If that server is down, we go
to the next one, and so on
and so forth
35

Consistent Hashing: How To
•Key-values 4 and 1 belong to
server A.
•Key-value 2 to server B.
•Key-value 3 to server C.
36

Consistent Hashing: Del Server
•If server C is removed,
•key-value 3 now belongs to
server A.
•All the other key-value
mappings are unchanged.
37

Consistent Hashing: Add Server
•If server D is added at the
position marked,
•which objects will now
belong to D?
38

Consistent Hashing: Cons
•This works well, except that the size of the intervals
assigned to each cache is pretty hit and miss.
•Since placement is essentially random, it is possible to have a
very non-uniform distribution of objects between
caches.
•To address this issue, the idea of
“virtual nodes” is introduced.
39

Consistent Hashing: Virtual Nodes
•Instead of mapping a server to a single point in the
circle, each server gets assigned to multiple points in
the ring.
•A virtual node looks like a single node in the system,
but each node can be responsible for more than one
virtual node.
•Effectively, when a new node is added to the system,
it is assigned multiple positions in the ring.
40

Virtual Nodes: Advantages
•If a node becomes unavailable (due to failures or routine
maintenance), the load handled by this node is evenly
dispersed across the remaining available nodes.
•When a node becomes available again, or a new node is
added to the system, the newly available node accepts a
roughly equivalent amount of load from each of the other
available nodes.
•The number of virtual nodes that a node is responsible
for can be decided based on its capacity, accounting for
heterogeneity in the physical infrastructure.
41
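One possible encoding of virtual nodes, extending the ring sketch above: each physical node is hashed at several positions under synthetic labels (here "A#0", "A#1", ..., my own convention), and a higher-capacity node simply gets more positions.

import java.security.MessageDigest
import scala.collection.immutable.TreeMap

// node name -> number of virtual nodes (proportional to capacity)
class VirtualHashRing(capacities: Map[String, Int]) {
  private def hash(s: String): BigInt =
    BigInt(1, MessageDigest.getInstance("MD5").digest(s.getBytes("UTF-8")))

  private val ring: TreeMap[BigInt, String] = TreeMap(
    capacities.toSeq.flatMap { case (node, vnodes) =>
      (0 until vnodes).map(i => hash(s"$node#$i") -> node)   // one ring position per virtual node
    }: _*)

  def nodeFor(key: String): String = {
    val clockwise = ring.iteratorFrom(hash(key))
    if (clockwise.hasNext) clockwise.next()._2 else ring.head._2
  }
}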

Data Replication
•To achieve high availability and durability, Dynamo
replicates its data on multiple hosts.
•Each data item is replicated at N hosts, where N is a
parameter configured “per-instance”.
•Each key k is assigned to a coordinator node
(described above).
•The coordinator is in charge of the replication of the
data items that fall within its range of the ring.
42

Data Replication
•The coordinator locally
stores each key within its
range,
•and in addition it replicates
these keys at the N-1
clockwise successor nodes
in the ring.
43

Data Replication
•The list of nodes that is responsible for storing a particular key
is called the preference list.
•The system is designed so that every node in the system can
determine which nodes should be in this list for any particular
key.
•To account for node failures, the preference list contains more
than N nodes.
•To avoid a key k being owned by fewer than N physical nodes
because of “virtual nodes”, the preference list skips positions so
that it contains only distinct physical nodes.
44
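A sketch of how such a preference list might be derived from the TreeMap-based ring used in the earlier sketches (my illustration, not Dynamo's code): walk clockwise from the key's position, wrap around, and keep the first N distinct physical nodes so that virtual nodes of the same host are not counted twice.

import scala.collection.immutable.TreeMap

// ring: position -> physical node; keyPos: the key's position on the ring
def preferenceList(ring: TreeMap[BigInt, String], keyPos: BigInt, n: Int): Seq[String] = {
  // clockwise walk starting at the key's position, wrapping around the ring
  val clockwise = (ring.iteratorFrom(keyPos) ++ ring.iterator).map(_._2).toSeq
  clockwise.distinct.take(n)   // skip positions owned by an already-chosen physical node
}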

High Availability for writes
•With eventual consistency, writes are propagated
asynchronously.
•A put() may return to its caller before the update has
been applied at all the replicas.
•In this scenario a subsequent get() operation
may return an object that does not have the latest
updates.
45

High Availability for writes: Example
•We can see this happen with shopping carts.
•The “Add to Cart” operation can never be forgotten
or rejected.
•When a customer wants to add an item to (or
remove from) a shopping cart and the latest version
is not available, the item is added to (or removed
from) the older version and the divergent versions
are reconciled later.
•Question!
46

High Availability for writes
•Dynamo treats the result of each modification as a
new and immutable version of the data.
•It allows for multiple versions of an object to be
present in the system at the same time.
•Most of the time, new versions subsume the
previous version(s), and the system itself can
determine the authoritative version (syntactic
reconciliation).
47

Singly-Linked List
START
48

Singly-Linked List
49
[figure: a chain of cons cells (e.g., 3, 5, 7) whose last tail points to Nil]
// fail is assumed to be a small helper that throws, e.g.:
// def fail(message: String): Nothing = throw new NoSuchElementException(message)
sealed abstract class List {
  def head: Int
  def tail: List
  def isEmpty: Boolean
}
case object Nil extends List {
  def head: Int = fail("Empty list.")
  def tail: List = fail("Empty list.")
  def isEmpty: Boolean = true
}
case class Cons(head: Int, tail: List = Nil) extends List {
  def isEmpty: Boolean = false
}

List: analysis
50
[figure: A = a list (e.g., 3, 5, 7); B = Cons(9, A); C = Cons(1, Cons(8, B));
the new lists reuse the cells of A instead of copying them]
structural sharing

List: append & prepend
/**
 * Time - O(1)
 * Space - O(1)
 */
def prepend(x: Int): List = Cons(x, this)

/**
 * Time - O(n)
 * Space - O(n)
 */
def append(x: Int): List =
  if (isEmpty) Cons(x)
  else Cons(head, tail.append(x))
51
[figure: prepending shares the whole original list; appending copies every cell]

List: apply
52
[figure: indexing walks the list, passing n - 1 to the tail at each step]
/**
 * Time - O(n)
 * Space - O(n)
 */
def apply(n: Int): Int =
  if (isEmpty) fail("Index out of bounds.")
  else if (n == 0) head
  else tail(n - 1) // or tail.apply(n - 1)

List: concat
53
path copying
[figure: C = A.concat(B) copies the cells of A (path copying) and shares B unchanged]
/**
 * Time - O(n)
 * Space - O(n)
 */
def concat(xs: List): List =
  if (isEmpty) xs
  else tail.concat(xs).prepend(head)

List: reverse (two approaches)
54
[figure: reversing a list, e.g., (4, 2, 6) becomes (6, 2, 4)]
The straightforward solution in O(n²):
def reverse: List =
  if (isEmpty) Nil
  else tail.reverse.append(head)

...or with tail recursion in O(n) (requires import scala.annotation.tailrec):
def reverse: List = {
  @tailrec
  def loop(s: List, d: List): List =
    if (s.isEmpty) d
    else loop(s.tail, d.prepend(s.head))

  loop(this, Nil)
}

List performance
55

Singly-Linked List
END
56

High Availability for writes
•Node failures can potentially result in the system
having not just two but several versions of the same
data.
•Updates in the presence of network partitions and
node failures can potentially result in an object
having distinct version sub-histories.
57

High Availability for writes
•Dynamo uses vector clocks in order to capture
causality between different versions of the same
object.
•One vector clock is associated with every version of
every object.
•We can determine whether two versions of an object
are on parallel branches or have a causal ordering by
examining their vector clocks.
58

High Availability for writes
•When dealing with different copies of the same object
(see the sketch after this slide):
–If the counters on the first object’s clock are less-than-or-
equal to all of the nodes in the second clock, then the first
is an ancestor of the second and can be forgotten.
–Otherwise, the two changes are considered to be in
conflict and require reconciliation.
59
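A minimal vector clock sketch implementing exactly this rule, assuming a clock is just a map from node name to counter (my own types, not Dynamo's):

// One counter per node; the clock travels with each version of an object.
final case class VectorClock(counters: Map[String, Long]) {
  // bumped by the node that coordinates a write
  def increment(node: String): VectorClock =
    VectorClock(counters.updated(node, counters.getOrElse(node, 0L) + 1))

  // `this` is an ancestor of `other` if every counter here is <= the one there
  def isAncestorOf(other: VectorClock): Boolean =
    counters.forall { case (node, c) => c <= other.counters.getOrElse(node, 0L) }

  // two versions conflict when neither descends from the other -> reconciliation needed
  def conflictsWith(other: VectorClock): Boolean =
    !isAncestorOf(other) && !other.isAncestorOf(this)

  // reconciliation keeps the element-wise maximum of the two clocks
  def merge(other: VectorClock): VectorClock =
    VectorClock((counters.keySet ++ other.counters.keySet).map { node =>
      node -> math.max(counters.getOrElse(node, 0L), other.counters.getOrElse(node, 0L))
    }.toMap)
}

For example, a clock written only at node Sx is an ancestor of the same clock later incremented at Sy, while two clocks incremented independently at Sy and Sz conflict and must be reconciled.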

HA with Vector Clocks
•A vector clock is an algorithm for generating a partial
ordering of events in a distributed system and
detecting causality violations.
•Vector clocks are based on logical timestamps, otherwise
known as Lamport Clocks.
•A Lamport Clock is a single integer value that is
passed around the cluster with every message sent
between nodes.
60

HA with Vector Clocks
•Events in the blue region are the causes leading to event B4,
whereas those in the red region are the effects of event B4
61

HA with Vector Clocks
•Each node keeps a record of what it thinks the latest (i.e.
highest) Lamport Clock value is, and if it hears a larger value
from some other node, it updates its own value.
•Every time a database record is produced, the producing
node can attach the current Lamport Clock value + 1 to it as a
timestamp.
•This sets up a total ordering on all records with the valuable
property that if record A may causally precede record B, then
A's timestamp < B's timestamp.
62
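A tiny sketch of that rule (illustrative only): advance the counter on every local event, and fast-forward to max(local, remote) + 1 whenever a larger value is heard from another node.

// A Lamport clock: a single integer kept per node.
final class LamportClock {
  private var value: Long = 0L

  def tick(): Long = { value += 1; value }       // local event, e.g., a record being produced

  def onReceive(remote: Long): Long = {          // timestamp heard from another node
    value = math.max(value, remote) + 1
    value
  }

  def current: Long = value
}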

Example Vector Clock: Dynamo
63

Execution of get() and put()
•Each read and write is handled by a coordinator node.
•Typically, this is the first among the top N nodes in
the preference list
•Read and write operations involve the first N healthy
nodes in the preference list, skipping over those that
are down or inaccessible.
64

Handling temporary failures
•To handle this kind of failure, Dynamo uses a “sloppy
quorum”.
•When a node (say A) is temporarily down, a write intended
for it is persisted on the next available node in the
preference list (say D).
•The replica sent to D will have a hint in its metadata
that suggests which node was the intended recipient
of the replica (in this case A).
65

Handling temporary failures
•Nodes that receive hinted replicas will keep them in
a separate local database that is scanned
periodically.
•Upon detecting that A has recovered, D will attempt
to deliver the replica to A.
•Once the transfer succeeds, D may delete the object
from its local store without decreasing the total
number of replicas in the system.
66
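A sketch of how a stand-in node might keep and hand back hinted replicas (names such as HintStore are mine; the callbacks for liveness and delivery are assumed to be provided by the surrounding system):

import scala.collection.mutable

// A replica accepted on behalf of an unreachable peer, tagged with the intended recipient.
final case class HintedReplica(intendedNode: String, key: String, value: Array[Byte])

final class HintStore(isUp: String => Boolean, deliver: HintedReplica => Boolean) {
  private val hints = mutable.Buffer.empty[HintedReplica]   // separate local database of hints

  def accept(replica: HintedReplica): Unit = hints += replica

  // scanned periodically: try to hand each replica back to its intended node
  def scan(): Unit = {
    val delivered = hints.filter(h => isUp(h.intendedNode) && deliver(h))
    hints --= delivered   // safe to drop once the transfer has succeeded
  }
}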

Recovering from permanent failures
•This is the scenario where hinted replicas become
unavailable before they can be returned to the
original replica node.
•To handle this and other threats to durability,
Dynamo implements an anti-entropy protocol to
keep the replicas synchronized.
67

Recovering from permanent failures
•To detect the inconsistencies between replicas faster
and to minimize the amount of transferred data,
Dynamo uses Merkle trees [Merkle 1988].
•A Merkle tree is a hash tree where:
–leaves are hashes of the values of individual keys, and
–parent nodes higher in the tree are hashes of their respective
children.
•The principal advantage of a Merkle tree is that each
branch of the tree can be checked independently
without requiring nodes to download the entire tree.
68
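A compact sketch of such a hash tree (MD5 only to stay consistent with the earlier sketches; any hash would do), assuming a non-empty sequence of values for one key range; comparing root hashes tells two replicas whether the range agrees at all, and comparing children narrows down where they differ.

import java.security.MessageDigest

sealed trait MerkleNode { def hash: Vector[Byte] }
final case class Leaf(hash: Vector[Byte]) extends MerkleNode
final case class Branch(hash: Vector[Byte], left: MerkleNode, right: MerkleNode) extends MerkleNode

object MerkleTree {
  private def md5(bytes: Array[Byte]): Vector[Byte] =
    MessageDigest.getInstance("MD5").digest(bytes).toVector

  // Leaves hash individual values; parents hash the concatenation of their children.
  def build(values: Seq[Array[Byte]]): MerkleNode = {
    var level: Seq[MerkleNode] = values.map(v => Leaf(md5(v)))
    while (level.size > 1) {
      level = level.grouped(2).map {
        case Seq(l, r) => Branch(md5((l.hash ++ r.hash).toArray), l, r)
        case Seq(only) => only          // odd node is carried up unchanged
      }.toSeq
    }
    level.head
  }
}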

Membership and failure detection
•Membership changes are explicit rather than automatic,
since node outages are often temporary and rarely mean
a permanent departure.
•An administrator uses a command line
tool or a browser
–to connect to a Dynamo node and issue a membership
change,
–to join a node to a ring, or
–to remove a node from a ring.
69

Implementation
•In Dynamo, each storage node has three main
software components:
1.request coordination,
2.membership and failure detection,
3.and a local persistence engine.
•All these components are implemented in Java.
70

Backend Storage
•Dynamo’s local persistence component allows for
different storage engines to be plugged in.
•Engines that are in use:
1.Berkeley Database (BDB) Transactional Data Store,
2.Berkeley Database Java Edition,
3.MySQL,
4.and an in-memory buffer with persistent backing store.
71

Conclusions
72

Dynamo Main Contributions
1.It demonstrates how different techniques can be
combined to provide a single highly-available
system.
2.It demonstrates that an eventually consistent
storage system can be used in production with
demanding applications.
3.It provides insight into the tuning of these
techniques.
73

References
1. http://diyhpl.us/~bryan/papers2/distributed/distributed-systems/consistent-hashing.1996.pdf
2. http://www.ist-selfman.org/wiki/images/9/9f/2006-schuett-gp2pc.pdf
3. http://www.tomkleinpeter.com/2008/03/17/programmers-toolbox-part-3-consistent-hashing/
4. http://www.tom-e-white.com/2007/11/consistent-hashing.html
5. http://michaelnielsen.org/blog/consistent-hashing/
6. http://research.microsoft.com/pubs/66979/tr-2003-60.pdf
7. http://www.quora.com/Why-use-Vector-Clocks-in-a-distributed-database
74

References
8. http://basho.com/why-vector-clocks-are-easy/
9. http://en.wikipedia.org/wiki/Vector_clock
10. http://basho.com/why-vector-clocks-are-hard/
11. http://www.datastax.com/dev/blog/why-cassandra-doesnt-need-vector-clocks
12. https://github.com/patriknw/akka-data-replication
75