
NVMCached: An NVM-based Key-Value Cache

Xingbo Wu, Fan Ni, Li Zhang, Yandong Wang, Yufei Ren, Michel Hack, Zili Shao, and Song Jiang, in Proceedings of the 7th ACM SIGOPS Asia-Pacific Workshop on Systems, 2016.

This is a short (6 page) paper that discusses the challenges of taking a memory-based key-value cache (e.g., memcached) and converting it to use non-volatile memory.  Because the authors do not have NVM available, they simulate it using DRAM, modeling NVM behavior with appropriate cache flushes and fences.  In fact, they use CLFLUSH and MFENCE, which may be a bit more than is required.
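
For concreteness, here is a minimal sketch of how one might simulate a persist barrier on DRAM using those instructions (via the usual compiler intrinsics); the helper names are mine, not the paper's:

    #include <emmintrin.h>   /* _mm_clflush, _mm_mfence */
    #include <stddef.h>
    #include <stdint.h>

    #define CACHE_LINE 64

    /* Flush every cache line covering [addr, addr+len), then fence so the
     * flushes complete before we proceed.  This mimics a persist barrier
     * on simulated NVM that is actually backed by DRAM. */
    static void persist(const void *addr, size_t len)
    {
        uintptr_t p   = (uintptr_t)addr & ~(uintptr_t)(CACHE_LINE - 1);
        uintptr_t end = (uintptr_t)addr + len;
        for (; p < end; p += CACHE_LINE)
            _mm_clflush((const void *)p);
        _mm_mfence();
    }

    /* Example: update a value, then force it out of the cache hierarchy. */
    static void store_and_persist(uint64_t *slot, uint64_t value)
    {
        *slot = value;
        persist(slot, sizeof(*slot));
    }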

Here is an interesting point for anyone whose familiarity with Intel processor architecture dates back more than about fifteen years: the CPUs no longer provide strong ordering guarantees.  If you have learned or refreshed your knowledge of memory consistency models more recently, then you are likely already familiar with this.  One of the interesting issues with NVM is that we need to think about memory consistency in a way that has never been required previously.  The authors of this paper do so.

They note that flushes are expensive and argue for using “consistency-friendly data structures” that minimize the number of flushes required, which certainly makes sense.  Some operations (like anything with the LOCK prefix) are performed atomically and are conceptually similar; both tie into the processor’s cache architecture (for those who have been around a while, the cache-line semantics for LOCK were introduced around 1995).
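
To make the “consistency-friendly” idea concrete, here is one common NVM commit pattern (a generic illustration under my own assumptions, not necessarily what NVMCached does): write the new item out of place and flush it, then publish it with a single 8-byte atomic store and one more flush, so each update needs only two flush points and a crash leaves either the old or the new item visible, never a torn one.

    #include <emmintrin.h>
    #include <stdatomic.h>
    #include <stdint.h>

    struct kv_item {
        uint64_t key;
        uint64_t value;   /* item assumed small enough to fit one cache line */
    };

    /* Commit pointer lives in (simulated) NVM; recovery follows it. */
    static _Atomic(struct kv_item *) committed;

    static void publish(struct kv_item *new_item)
    {
        /* Step 1: persist the new item itself (out of place). */
        _mm_clflush(new_item);
        _mm_mfence();

        /* Step 2: publish it with one 8-byte atomic store, then persist the
         * cache line holding the pointer.  Only two flush points per update. */
        atomic_store_explicit(&committed, new_item, memory_order_release);
        _mm_clflush((void *)&committed);
        _mm_mfence();
    }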

One of the important observations in this paper is that the reason to use NVM is that it allows fast restart after a system failure, such as a crash or power event.  For memcached, there are users asking for persistence.  Even if the service is only a cache, offloading work to the cache can be a powerful mechanism for improving overall throughput, but the larger the cache becomes, the higher the cost when its contents are lost after a system crash.

However, this also simplifies the problem at hand: discarding data in this type of system is of minimal concern.  They utilize checksums to detect damaged data structures, which alleviates the need to perform ordered write operations, and they reorganize their data structures to permit changes with only infrequent fencing.  They point out the problem of zombie data (data that was deleted before a crash but reappears after the system reboots).  To prevent this situation, they split their data structures.

The checksums prove to be much less expensive than flushes; the authors claim up to 20x faster.  (My own reading indicates that using the optimized CRC32 instruction built into modern Intel CPUs, the cost is just over 1 clock cycle per byte checksummed, so a 64-byte cache line would cost approximately 70 clock cycles; on a 2.0 GHz processor, that is roughly 35 nanoseconds, which is certainly somewhere between 5 and 20 times faster than writing back a cache line.)
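
For reference, checksumming a 64-byte cache line with the hardware CRC32 instruction (the SSE4.2 _mm_crc32_u64 intrinsic) looks roughly like this; the function name is mine, and the cycle estimates above come from instruction tables rather than from this code:

    #include <nmmintrin.h>   /* _mm_crc32_u64 (SSE4.2) */
    #include <stdint.h>

    /* Checksum one 64-byte cache line as eight 8-byte chunks. */
    static uint32_t crc32_cache_line(const void *line)
    {
        const uint64_t *p = (const uint64_t *)line;
        uint64_t crc = 0xFFFFFFFFu;          /* conventional initial value */
        for (int i = 0; i < 8; i++)
            crc = _mm_crc32_u64(crc, p[i]);
        return (uint32_t)~crc;               /* conventional final inversion */
    }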

The authors also touch on the challenges of memory allocation.  Memory fragmentation in ordinary dynamic memory is something we often “fix” by rebooting the machine.  Now, however, fragmentation becomes a storage-level problem, and fixing it involves write amplification.  They organize their allocation strategy around the observation that there is considerable locality across the KV store.  They combine a log-structured allocation scheme with a “zone based” allocation model and attempt to combine cleaning, eviction, and updates to minimize the impact on performance.  They also use DRAM to track access information as part of their LRU implementation.  This creates a burden on crash recovery, but yields significantly improved runtime performance.
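
As a rough illustration of combining log-structured and zone-based allocation (a generic sketch under my own assumptions, not the authors' allocator), items are appended sequentially within a zone, and a mostly-dead zone can later be cleaned and reclaimed wholesale:

    #include <stdint.h>

    #define ZONE_SIZE (1u << 20)   /* 1 MiB zones, chosen arbitrarily */

    struct zone {
        char     data[ZONE_SIZE];
        uint32_t tail;             /* next free offset within the zone */
        uint32_t live_bytes;       /* decremented when items are deleted or evicted */
    };

    /* Append-only allocation inside the current zone; returns NULL when the
     * zone is full, signalling that the caller should seal it and open a new
     * one (or clean the zone with the fewest live bytes). */
    static void *zone_alloc(struct zone *z, uint32_t size)
    {
        if (z->tail + size > ZONE_SIZE)
            return NULL;
        void *p = &z->data[z->tail];
        z->tail += size;
        z->live_bytes += size;
        return p;
    }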

They also implement a DRAM write combining cache; the idea is that when an item is detected to be “hot” the item is cached in DRAM.  Those hot items will thus be discarded if the system crashes.

Their evaluation is against a modified version of memcached.  They utilize four Facebook traces; interestingly, they do not use one (VAR) because “it is write-dominant with only a few distinct keys touched.”  I am not certain why this makes it unsuitable, though I suspect they do not obtain favorable behavior with it.

They evaluate the performance of native memcached versus their NVM-enhanced version with a series of micro-benchmarks after a restart event.  They show that in several cases it takes more than 1 billion requests before the performance of the DRAM-based memcached converges with that of the NVM-enhanced version.  When they use the “real world” Facebook memcached traces, they observe that three of the benchmarks show comparable performance.  Two of them, SYS and VAR, show substantially better performance.  Both benchmarks are write intensive.  I admit this surprised me, since I am not certain why writing to (simulated) NVM should be substantially faster than writing to DRAM.

Overall, it is interesting work and touches on important challenges for working with NVM as well as improving system performance.

SILT: A Memory-Efficient, High-Performance Key-Value Store

Hyeontaek Lim, Bin Fan, David G. Andersen, Michael Kaminsky, in Proceedings of the 23rd Symposium on Operating Systems Principles (SOSP ’11), October 23-26, 2011, Cascais, Portugal.

In this paper, we have an interesting hybrid key-value store; part of the store is in DRAM, while the rest of it is stored in flash storage.  The authors have focused on optimizing the performance of the store across the specific media chosen, which leads to some interesting choices and observations.

SILT (Small Index Large Table) is a memory-efficient, high-performance key-value store system based on flash storage that scales to serve billions of key-value items on a single node. It requires only 0.7 bytes of DRAM per entry and retrieves key/value pairs using on average 1.01 flash reads each. SILT combines new algorithmic and systems techniques to balance the use of memory, storage, and computation. Our contributions include: (1) the design of three basic key-value stores each with a different emphasis on memory-efficiency and write-friendliness; (2) synthesis of the basic key-value stores to build a SILT key-value store system; and (3) an analytical model for tuning system parameters carefully to meet the needs of different workloads. SILT requires one to two orders of magnitude less memory to provide comparable throughput to current high-performance key-value systems on a commodity desktop system with flash storage.

Thus, to achieve optimal performance they construct a hybrid ensemble of key-value stores that are optimized to particular underlying media.

They start early with their performance observations and maintain this pace throughout the paper.  They point out that they have a small memory requirement (0.7 bytes per key-value entry) and require, on average, only a single read from flash per lookup.

This fits well with their observation that memory efficiency is critically important for scalable key-value stores.  Their system saturated the capabilities of their hardware at 46,000 lookups per second, though this does not guarantee that it will do so on newer, faster hardware (and we will get to several more current KV stores).

 

Figure 2 (from the paper) provides a basic description of the SILT architecture.  From this, we can see the three distinct types of stores that are used within their implementation:

  • LogStore – uses a CPU-efficient cuckoo hash keyed only by a short tag derived from the full key (keys are 20 bytes).  The benefit of this approach is that it avoids accessing flash when the key is not present.  It is probabilistic, in that with some low probability a lookup may still go to flash for a key that turns out not to be there.  (A rough sketch of this tag-based filtering appears just after this list.)
  • HashStore – uses a memory-efficient storage model for their cuckoo flash table.  These stores are immutable and are stored in flash.
  • SortedStore – uses an entropy-coded trie, which provides a very space-efficient indexing mechanism: the average, according to the authors, is 0.4 bytes per key, which is remarkably compact given that the keys themselves are 160 bits (20 bytes).  The authors provide an impressive list of specific optimizations they have applied: sorting data, packing indexes, and intelligent (non-pointer-based) addressing.
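
Here is the rough sketch of the tag-based filtering idea promised above (my own illustration, not SILT's code; the table size, tag derivation, and the hash_key/flash_matches helpers are hypothetical):

    #include <stdbool.h>
    #include <stdint.h>

    #define NBUCKETS (1u << 20)   /* illustrative table size */

    struct slot {
        uint16_t tag;      /* short tag derived from the key's hash */
        uint16_t valid;
        uint32_t offset;   /* location of the full key-value pair on flash */
    };

    static struct slot table[NBUCKETS];

    /* Hypothetical helpers: a 64-bit hash of the 20-byte key, and a routine
     * that reads the entry at 'offset' from flash and verifies the full key. */
    uint64_t hash_key(const uint8_t key[20]);
    bool flash_matches(uint32_t offset, const uint8_t key[20], uint64_t *value_out);

    /* If neither candidate bucket holds a matching tag, the key is definitely
     * absent and flash is never touched.  A matching tag can be a rare false
     * positive, so the full key is verified against flash. */
    static bool lookup(const uint8_t key[20], uint64_t *value_out)
    {
        uint64_t h   = hash_key(key);
        uint16_t tag = (uint16_t)(h >> 48) | 1;                /* non-zero tag */
        uint32_t b1  = (uint32_t)(h % NBUCKETS);
        uint32_t b2  = (b1 ^ (tag * 0x5bd1e995u)) % NBUCKETS;  /* alternate bucket derived from tag */

        if (table[b1].valid && table[b1].tag == tag &&
            flash_matches(table[b1].offset, key, value_out))
            return true;
        if (table[b2].valid && table[b2].tag == tag &&
            flash_matches(table[b2].offset, key, value_out))
            return true;
        return false;
    }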

They also provide us with pretty figures to graphically lay out their data structures.   First, Figure 3, which shows the in-memory cuckoo hash table:

Note that key-value insertions are logged to flash storage.

When we move on to the HashStore, they show us how the data is reorganized from insertion order to hash-sorted order as it is moved from memory to flash.

Recall that once written to Flash, the HashStore is immutable – so this order will remain fixed.

Of course, one of the important aspects of a key-value store is the ability to look up the data quickly.  Thus, they also show us how they sort data using a prefix tree (Trie).  This is a key aspect of their SortedStore:

The in-memory Trie permits them to rapidly find the corresponding data in Flash.  This is one of their key motivations for very efficient encoding of the Trie: it ensures they can store it in memory.  Once the indices no longer fit in memory, the lookup system becomes I/O bound (to the Flash, in this case).
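
To see why an in-memory trie is enough to locate data on flash, here is a simplified, uncompressed version of the idea (my own sketch; SILT's real structure entropy-codes the per-node counts to reach roughly 0.4 bytes per key):

    #include <stdint.h>

    /* Each internal node remembers how many keys fall in its left (0-bit)
     * subtrie.  Walking the key's bits and summing those counts yields the
     * key's position in the sorted array on flash, so a single flash read
     * retrieves the entry. */
    struct trie_node {
        uint32_t left_count;          /* number of keys under the 0 branch */
        struct trie_node *child[2];   /* NULL at a leaf */
    };

    static uint32_t sorted_index(const struct trie_node *root,
                                 const uint8_t *key, uint32_t key_bits)
    {
        uint32_t index = 0;
        const struct trie_node *n = root;
        for (uint32_t i = 0; i < key_bits && n != NULL; i++) {
            int bit = (key[i / 8] >> (7 - (i % 8))) & 1;
            if (bit)
                index += n->left_count;   /* skip everything in the left subtrie */
            n = n->child[bit];
        }
        return index;   /* offset of the key's entry in the sorted flash data */
    }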

Figure 6 then shows how they actually encode the length of the tree in each direction, and how it is entropy encoded.  There is some serious thought that has gone into their layout.  They do admit, however, that they are not providing crash tolerance: there will be data that was stored in DRAM that cannot be recovered after a crash.  They do not discuss the use of NVM for achieving this.  Instead, they suggest a synchronous write mechanism in which the write blocks until the log data has been written to the flash-based log.

One of the interesting aspects of their performance evaluation is the recognition that they have quite a few parameters that must be chosen.  Thus, they spend more than a page of the paper just discussing the relative merits of the trade-offs involved in selecting various parameters.  They consider write amplification (the need to write more than just the data being preserved), read amplification (performing reads only to discover that the data you seek is not present), memory overhead (how much DRAM is used), and update rate versus flash lifetime.  This latter point is an interesting one to keep in mind when reading about non-volatile memory: it wears out.  There is a certain amount of degradation in the underlying physical medium each time it is erased and re-used.  The specific details depend upon the characteristics of the medium, but it is an issue to keep in mind.
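
The update-rate-versus-lifetime trade-off is easy to illustrate with a back-of-the-envelope calculation (the numbers below are illustrative, not from the paper):

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative numbers only. */
        double capacity_bytes   = 256e9;   /* 256 GB flash device */
        double erase_cycles     = 10000;   /* program/erase cycles per cell */
        double update_bytes_sec = 50e6;    /* application write rate: 50 MB/s */
        double write_amp        = 5.0;     /* flash bytes written per byte updated */

        /* Total bytes the device can absorb before wearing out, divided by
         * the effective write rate after amplification. */
        double lifetime_sec = (capacity_bytes * erase_cycles) /
                              (update_bytes_sec * write_amp);
        printf("estimated lifetime: %.2f years\n",
               lifetime_sec / (365.25 * 24 * 3600));
        return 0;
    }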

The evaluation of SILT is certainly interesting.  They choose not to compare to any other KV system and instead focus on the performance of their own system using a variety of fairly standard loads (YCSB) as well as simple get/put key operations as micro-benchmarks.  They end up claiming that they can look up almost 45,000 keys per second in the slowest of their hybrid stores.  When combined, they indicate they can achieve almost 58,000 get operations per second on a single core.  As they move to multiple cores they see further improvement (180,000 get operations per second for data that is already in memory).

SILT offers some interesting observations, particularly about highly efficient use of memory and novel indexing schemes.

Cache Craftiness for Fast Multicore Key-Value Storage

Yandong Mao, Eddie Kohler, Robert Morris, in Proceedings of the 7th ACM European Conference on Computer Systems (EuroSys ’12), pp. 183-196, Bern, Switzerland, April 10-13, 2012.

In this work, the authors build a key-value system called MassTree.  This is an in-memory key-value store (I will look at several of them, in fact).  What is novel about this particular variant is the authors’ focus on providing a high-performance parallel-access implementation and the mechanisms by which they achieve this.

MassTree supports arbitrary-length keys; it assumes nothing about the keys (so they may be binary data).  It uses B+-trees for storage, organized into a Trie structure: each B+-tree stores a fixed slice of the key, and the Trie connects the various slices together.  This use is interesting, since most of the uses of Tries I have seen are for strings, as they do an excellent job of managing overlapping string data (prefix trees).  The authors use this for binary keys.  There is nothing inherently more difficult about binary versus string keys (since they are equivalent), but this choice makes the solution very flexible, as it is not particularly data dependent.  This basic layout is shown in Figure 1 (from the paper).
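
One way to picture the slicing (my own sketch, not code from the paper): each trie layer indexes one fixed 8-byte slice of the key, packed big-endian so that unsigned integer comparison of slices matches lexicographic comparison of the raw key bytes.

    #include <stdint.h>
    #include <string.h>

    /* Extract the 8-byte slice of 'key' used at trie layer 'layer'; short
     * keys are zero-padded.  Lookup searches the layer's B+-tree for this
     * slice; if multiple keys share the slice, it follows a pointer to the
     * next trie layer and repeats with layer + 1. */
    static uint64_t key_slice(const uint8_t *key, size_t key_len, unsigned layer)
    {
        uint8_t buf[8] = {0};
        size_t  off    = (size_t)layer * 8;
        if (off < key_len) {
            size_t n = key_len - off;
            if (n > 8)
                n = 8;
            memcpy(buf, key + off, n);
        }
        uint64_t slice = 0;
        for (int i = 0; i < 8; i++)
            slice = (slice << 8) | buf[i];   /* big-endian packing */
        return slice;
    }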

One of the challenges with concurrent data structures is how to handle the common case – no collisions – with minimal performance overhead.  The classic mutual exclusion model (or reader/writer locks) involves a certain amount of overhead, even for non-contended locks, because of the need to perform interlocked operations against shared (common) memory where the locks are maintained.  The system implemented by the authors does not require any locking for readers (lookups). Updates are done with locks local to the CPU, which helps minimize the contention of typical locks.

One interesting observation is that their choice of this hybrid Trie/B+-tree structure was motivated by prior systems that struggled with performance in the presence of variable length keys.  In MassTree, the rate limiting factor for queries is the cost of walking the tree.  They minimize this by using “a wide fan-out tree to reduce tree-depth, prefetches nodes from DRAM to overlap fetch latencies, and carefully lays out data in cache lines to reduce the amount of data needed per node.”

Hence my interest in this paper: these all seem to be important lessons for persistent memory as well, where the latencies are somewhat larger than for DRAM.  Further, the authors are concerned about correctness.  They have not looked at persistence (and recoverable consistency), however, so there is still work for me to do should I investigate this further.

The authors conclude by claiming the following contributions:

First, an in-memory concurrent tree that supports keys with shared prefixes efficiently.  Second, a set of techniques for laying out the data of each tree node, and accessing it, that reduces the time spent waiting for DRAM while descending the tree.  Third, a demonstration that a single tree shared among multiple cores can provide higher performance than a partitioned design for some workloads.  Fourth, a complete design that addresses all bottlenecks in the way of million-query-per-second performance.

Their emphasis on correctness and cache efficiency is certainly an attractive aspect of this work.

The driving considerations for their design were: efficient support of many different key distributions, including binary and variable-length keys with common prefixes; fine-grained concurrent access for high-performance parallel operation; and cache efficiency through optimal data placement and prefetching.

The authors establish the characteristics of their tree structure, including data placement.  These make reasoning about their tree characteristics simpler.  For example, they note that MassTree does not guarantee that it is a balanced structure.  However, due to the way the tree itself is structured, operations have the same algorithmic cost: O(l log n) comparisons, where l is the length of the key and n is the number of keys.

As a pragmatic check on their implementation, they also use a partial-key B-tree (pkB-tree) for comparison.  Despite the fact that the pkB-tree is balanced, the authors note that MassTree performs favorably on several benchmarks.  The authors go into detail about their implementation, including the construction of border nodes and interior nodes, as well as how they lay out data (again, with an eye towards cache-line efficiency).

To achieve this efficiently, they use a versioning scheme.  A node has a version number.  To modify the node, a given processor must update the version to indicate that it is changing the node.  A reader will snapshot the version at the start of the read, and compare it at the end of the read.  If it changed, the reader knows the state may have changed and can retry the read operation (essentially a variant of software transactional memory).  The detailed diagram of this is shown in Figure 3 (from the paper).
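
In spirit, the reader side resembles a seqlock.  Here is a minimal sketch of such an optimistic read (my own simplification; MassTree’s real version field carries more state than this, and writers are assumed to be serialized by per-node locks):

    #include <stdatomic.h>
    #include <stdint.h>

    #define DIRTY_BIT 1u   /* low bit set while a writer is modifying the node */

    struct node {
        _Atomic uint32_t version;
        uint64_t payload;          /* stand-in for the node's real contents */
    };

    /* Optimistic read: snapshot the version (waiting out any in-progress
     * writer), read the data, then re-check the version.  If it changed,
     * the read may have seen a torn state and must retry.  Readers never
     * take a lock. */
    static uint64_t read_node(struct node *n)
    {
        for (;;) {
            uint32_t v1 = atomic_load_explicit(&n->version, memory_order_acquire);
            if (v1 & DIRTY_BIT)
                continue;                      /* writer active: spin */
            uint64_t data = n->payload;
            atomic_thread_fence(memory_order_acquire);
            uint32_t v2 = atomic_load_explicit(&n->version, memory_order_relaxed);
            if (v1 == v2)
                return data;                   /* consistent snapshot */
            /* version moved: retry */
        }
    }

    /* Writer side (assumed already holding the node's lock): set the dirty
     * bit, mutate, then bump the version.  Adding 1 clears the dirty bit
     * and advances the counter held in the higher bits. */
    static void write_node(struct node *n, uint64_t value)
    {
        atomic_fetch_or_explicit(&n->version, DIRTY_BIT, memory_order_acquire);
        n->payload = value;
        atomic_fetch_add_explicit(&n->version, DIRTY_BIT, memory_order_release);
    }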

The paper describes the concurrency model in the face of conflicting writers as well.  By keeping their lock in the same cache line as their data, they exploit the cache coherence protocol.  The authors note that lock-free operations (e.g., compare-and-swap or load-linked/store-conditional) have comparable cache behavior.

Indeed, much of the rest of the technical content of the paper is explaining why their approach is correct – an essential point for concurrent access systems.  Without that, there really is not much point!

While brief, their discussion about value storage is interesting: their measurements are done assuming that values will be small.  They state they have a scheme for managing large values as well, via a separate allocator.  While they do not claim this, my observation is that a “real world” system would likely need to have some hybrid form of this.

MassTree is an in-memory key-value store.  To provide persistence, they rely upon a write-behind log scheme.  When used via the network interface, writes are not immediately durable.  To bound the loss window, they choose a 200 ms timer: the log is written to disk every 200 ms.  They do not evaluate this persistence model, offering it as evidence that persistence is not incompatible with performance.
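
A write-behind log of this kind can be sketched in a few lines (a generic illustration under my own assumptions; setup such as opening the file and starting the flusher thread is omitted, and a real implementation would double-buffer rather than hold the lock during I/O):

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define LOG_CAPACITY (1 << 20)

    static char   log_buf[LOG_CAPACITY];
    static size_t log_len;
    static pthread_mutex_t log_lock = PTHREAD_MUTEX_INITIALIZER;
    static FILE  *log_file;   /* assumed already opened elsewhere */

    /* Called on every update: append the record to the in-memory log and
     * return immediately; the caller never waits for the disk. */
    static void log_append(const void *rec, size_t len)
    {
        pthread_mutex_lock(&log_lock);
        if (log_len + len <= LOG_CAPACITY) {
            memcpy(log_buf + log_len, rec, len);
            log_len += len;
        }
        pthread_mutex_unlock(&log_lock);
    }

    /* Background flusher: every 200 ms, write whatever has accumulated and
     * force it to stable storage.  A crash can therefore lose at most
     * roughly the last 200 ms of updates. */
    static void *log_flusher(void *arg)
    {
        (void)arg;
        for (;;) {
            usleep(200 * 1000);
            pthread_mutex_lock(&log_lock);
            if (log_len > 0) {
                fwrite(log_buf, 1, log_len, log_file);
                fflush(log_file);
                fsync(fileno(log_file));
                log_len = 0;
            }
            pthread_mutex_unlock(&log_lock);
        }
        return NULL;
    }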

This last point is an interesting one, especially when considered in the context of NVM: what are the trade-offs?  The authors hint at this but do not explore the space.

For those interested, a version of the source code can be found here: https://github.com/kohler/masstree-beta