Yearly Archives: 2018

Polyvalues: A Tool for Implementing Atomic Updates to Distributed Data

Polyvalues: A Tool for Implementing Atomic Updates to Distributed Data
Warren A. Montgomery, in Proceedings of the seventh ACM symposium on Operating systems principles, pp. 143-149. ACM, 1979.

I found this paper to be surprisingly interesting despite the fact it may be one of the least cited SOSP papers I’ve ever seen (ACM lists one citation to it, and Google Scholar lists two.)

The solution presented is based on the notion of maintaining several potential current values (a polyvalue) for each database item whose exact value is not known, due to failures interrupting atomic updates.  A polyvalue represents the possible set of values that an item could have, depending on the outcome of transactions that have been delayed by failures.  Transactions may operate on polyvalues, and in many cases a polyvalue may provide sufficient information to allow the results of a transaction to be computed, even though the polyvalue does not specify an exact value.  An analysis and simulation of the polyvalue mechanism shows that the mechanism is suitable for databases with reasonable failure rates and recovery times. The polyvalue mechanism is most useful where prompt processing is essential, but the results that must be produced promptly depend only loosely on the database state.  Many applications, such as electronic funds transfer, reservations, and process control, have these characteristics.

To me, this seems like a useful insight: sometimes, the correct outcome of a transaction does not depend upon the specific value of some object.  For example, if a transaction is checking to see if there are sufficient seats left to sell on an airline flight, the fact that the possible seat counts are 37, 39, 40, or 41 doesn’t impact the ability of the system to sell one more seat.  There is no hard requirement that we must have an exact value.

In its own way, this is an intriguing manifestation of eventual consistency.  Eventually, the system will be able to figure out the correct number of seats available, once the unknown outcomes have been computed.  Today, we understand consistency models well because relaxing consistency in distributed systems helps improve performance.

The traditional, lock-based system approach (such as we discussed in Implementing Atomic Actions on Decentralized Data) provides strong consistency guarantees.  This was in keeping with the original requirements that transactions lead to a consistent set of state changes.  But transactions are there to ensure we move from one consistent state to another consistent state.  This concept of being able to proceed even in the face of some level of uncertainty points out that we just need to end up in a consistent state, not the consistent state.  We trade off strict determinism for performance and flexibility.

“[T]he failure of a site should not indefinitely delay any transaction that does not access data stored at that site.”  This likely seems blindingly obvious, yet in my own experience with distributed systems achieving this is harder than one might think.  Leslie Lamport is credited with this definition of a distributed system: “A distributed system is one in which the failure of a computer you didn’t even know existed can render your own computer unusable.”

Polyvalues work by maintaining a vector of possible outcome values.  If the existing possible outcome values are all consistent with allowing a new transaction to proceed, it seems reasonable to permit the new transaction to proceed, versus having it block and wait for a single definitive outcome.  After all, regardless of the outcome this transaction can proceed.

The author defines a polyvalue: “a set of pairs <v,c> where v is a simple value and c is a condition which is a predicate.”  This introduces the idea of a logical operation that determines the outcome, rather than just a simple record of the data value, and the value of an object as being a range of possible values that have not yet been determined.  “A polyvalue is assigned to an item if a failure delays a transaction that is updating that item, or a polyvalue may be produced as one of the results of a transaction that accesses an item that has a polyvalue.”

The author then goes on to explain the logic of polyvalues, and how their inclusion into a transaction converts it to a polytransaction.  The implementation here is one in which multiple possible outcomes are described.  This approach would certainly seem to limit the use of this technique as otherwise there could be a state space explosion.  He describes a mechanism of collapsing these states – the precise number of seats on the plane is a polyvalue, but the decision to sell the ticket for one seat need not be blocked at that point since all the polyvalues lead to the same outcome.
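
To make that collapsing idea concrete, here is a toy sketch of a polyvalue (only the pairing of a value with a condition comes from the paper; the class and its methods are my own construction), showing how a decision can proceed when every possible value agrees:

```python
# Toy sketch of a polyvalue: a set of <value, condition> pairs, where
# each condition names the pending outcome that would make that value
# the true one.  Only the pairing itself comes from the paper; the
# class and its methods are my own construction.

class Polyvalue:
    def __init__(self, pairs):
        self.pairs = list(pairs)  # [(possible value, condition), ...]

    def decide(self, predicate):
        """Return True/False if the predicate agrees across every
        possible value, or None if the decision must wait for the
        delayed transactions to resolve."""
        results = {predicate(value) for value, _ in self.pairs}
        return results.pop() if len(results) == 1 else None


# Seats remaining could be 37, 39, 40, or 41 depending on delayed updates.
seats = Polyvalue([(37, "txn A commits"), (39, "txn A aborts"),
                   (40, "txn B commits"), (41, "txn B aborts")])

# Selling one more seat only needs "at least one seat left", which every
# possible value satisfies, so the sale can proceed without blocking.
print(seats.decide(lambda n: n >= 1))   # True
# An exact count, by contrast, must block until the outcomes are known.
print(seats.decide(lambda n: n == 40))  # None
```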

A polytransaction that has possible paths which fail will have to block and wait if the outcome depends upon the specific values of the polyvalues, but if all possible polyvalues yield the same result, the polytransaction can proceed.

The insight here is that in highly distributed databases most transactions can achieve a valid outcome regardless of the intermediate state values.  If you look at their example of the bank account withdrawal model, it is clear that this makes sense.  The operation of withdrawing funds from your account can complete in any order as long as none of them lead to a negative balance (they use this example in the paper). Thus, it makes no sense to block one in favor of the other.

To evaluate this model, the author defines various terms:

  • I – the number of items in the database
  • – the number of updates per second
  • F – the failure probability of an update
  • R – the recovery rate (per second) from failed operations
  • D – the dependency count (average) for new values
• Y – the probability that the new value of the update does not depend upon the previous value

He then predicts the number of polyvalues that will exist in the database (Table 1 from the paper):

Table 1

Thus, even with somewhat pessimistic error and recovery rates, he does not expect more than 51 polyvalues within the database.

Finally, he reports the results of his simulation of the system having 10,000 database entries:

Table 2

Now with 1% failure rates, very slow recovery rates (one recovery per 10 seconds), high dependency rates (D=5), and 10 transactions per second, he still only ends up with 20 polyvalues. Thus, this approach seems to help in scaling without a dramatic increase in complexity.

My take-away: strict consistency is not necessary to construct a viable system. Even allowing for some variance in outcomes it is possible to optimize the performance of the overall system at a nominal increase in potential size and complexity.

Useful insights, indeed.

Implementing Atomic Actions on Decentralized Data

Implementing Atomic Actions on Decentralized Data
David P. Reed, Transactions on Computer Systems, Vol 1. No. 1, February 1983, pp. 3-23.

This certainly must have been an interesting choice to be the first paper of the first ACM Transactions on Computer Systems.  It is certainly an interesting work on concurrent activity within a distributed system.  It relies upon a basic concept of ordering within a distributed system (“decentralized data”).  He builds upon the basics laid down by Leslie Lamport in Time, Clocks, and the Ordering of Events in a Distributed System. While Lamport was concerned about defining the (partial) ordering of events in the distributed system, Reed is concerned about using that ordering to construct useful distributed data updates.

Given that file systems, especially distributed file systems, are concerned with managing data across nodes in a consistent fashion, this work is of particular importance.  By 1983 we have seen the emergence of network file systems, which I plan on describing further in coming posts, but they are still fairly primitive.  Database systems are further along in allowing distributed data and coordination through things like two-phase commit.

He starts by describing the goals of this work:

The research reported here was begun with the intention of discovering methods for combining programmed actions on data at multiple decentralized computers into coherent actions forming a part of a distributed application program.

His focus is on coordinating concurrent actions across distributed data and ensuring that failures are properly handled.  What does it mean to properly handle failures?  Essentially, it means that the data is in a consistent state once the system has recovered from the failure.  He starts by defining terms that relate to consistency models.  For example, he defines an atomic action as a set of operations that execute in different locations and at different times but cannot be further decomposed.  A single action starts from a consistent state and moves to a consistent state at the end.  Any intermediate state of the system is not visible (what we would call “isolation” now).  He formally defines these concepts as well.

He touches on the idea of consistency, in which one starts with a consistent system and then proves each (atomic) operation yields a consistent state.  In my experience this aspect of distributed systems is sometimes skipped, often due to the complexity of doing the work required here.  In recent years, formal proof methods have been used to automate some aspects of this.  I’m sure I will touch upon it in later posts.

One key benefit of this system of atomic actions is that it makes things simpler for application programmers: in general, they need not deal with unplanned concurrency and failure.  Indeed, that is one of the key contributions of this work: the process of reasoning about failure and how to handle it.  Indeed, in my experience, handling failure gracefully is one of the substantial challenges inherent in constructing distributed systems.  If something can fail, it will.

Achieving atomic actions requires the ability to interlock (“synchronization”) against other actors within the system and the ability to gracefully recover from failure cases.  The author goes on to describe what his decentralized system looks like: a message passing model (via the network, presumably), with nodes containing stable storage and the ability to read and write some finite-sized storage unit atomically (“blocks”).

One class of failure the author explicitly disclaims: a situation in which the system performs an operation but ends up with a different but valid outcome.  This makes sense, as it would be difficult to reason in the face of arbitrary changes each time a given operation were requested.  He sets forth a series of objectives for his system:

(1) Maximize node autonomy, while allowing multisite atomic actions
(2) Modular composability of atomic actions.
(3) Support for data-dependent access patterns.
(4) Minimize additional communications.
(5) No critical nodes.
(6) Unilateral aborting of remote requests.

Having laid this groundwork, the author then defines the tools he will use to achieve these objectives.  This includes a time-like ordering of events, version information for objects, and the ability to associate intermediate tentative steps together (“possibilities”).

He envisions a versioned object system, where the states of the object correspond to changes made to the object.

At this point I’ll stop and make an observation: one of the challenges for this type of versioning is that the way in which we view objects can greatly complicate things here.  For example, if we modify an object in place then this sort of versioning makes sense.  However, if we modify an object by creating a new object, writing it, and then replacing the old object with the new object, we have a more complex functional model than might be naively envisioned.  This is not an issue clearly addressed by the current paper as it relates mostly to usage.  But I wanted to point it out because this sort of behavior will make things more difficult.

One of the important contributions in this work is the discussion about failure recovery.  This is, without a doubt, one of the most complex parts of building a distributed system: we must handle partial failures.  That is, one node goes offline, one network switch disappears, one data center loses power.

The author thus observes: “If a failure prevents an atomic action from being completed, any WRITE the atomic action had done to shared data should be aborted to satisfy the requirement that no intermediate states of atomic actions are visible outside the atomic action.”  Thus, one benefit of the versioned objects is that the pending transaction (“possibility”) can track the updated versions.  Abort simply means that the tentative versions of the objects in the transaction are deleted.  Committing means that the tentative versions of the objects in the transaction are promoted to being the latest version.

Thus, we see the basic flow of operations: a transaction is started and a possibility is created.  Each potential change is described by a token.  Tokens are then added to the possibility.  While the term is not used here, it appears to be a model for what we now refer to as write-ahead logging (sometimes also called intention logging).
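
Here is a minimal sketch of how I read that flow (the class and method names are my shorthand, not Reed’s): tentative versions accumulate under a possibility, abort discards them, and commit promotes them to the latest committed version.

```python
# Minimal sketch of versioned objects grouped under a "possibility"
# (a pending transaction): tentative versions stay invisible until
# commit promotes them.  All names here are my shorthand, not Reed's.

import itertools

class VersionedObject:
    def __init__(self, value):
        self.committed = [value]  # history of committed versions
        self.tentative = {}       # possibility id -> tentative value

    def read(self):
        return self.committed[-1]  # only the latest committed version


class Possibility:
    _ids = itertools.count(1)

    def __init__(self):
        self.id = next(self._ids)
        self.touched = []          # objects with tentative versions (tokens)

    def write(self, obj, value):
        obj.tentative[self.id] = value   # tentative version, not yet visible
        if obj not in self.touched:
            self.touched.append(obj)

    def commit(self):
        for obj in self.touched:
            obj.committed.append(obj.tentative.pop(self.id))

    def abort(self):
        for obj in self.touched:
            obj.tentative.pop(self.id, None)  # discard tentative versions


balance = VersionedObject(100)
p = Possibility()
p.write(balance, 75)
print(balance.read())  # 100 -- the intermediate state is not visible
p.commit()
print(balance.read())  # 75
```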

Timestamps are introduced in order to provide the partial ordering of events or operations, so that the changes can be reliably reproduced.  The author goes into quite an extensive discussion about how the system generates timestamps in a distributed fashion (via a pre-reservation mechanism); this approach ensures that the participants need not communicate in order to properly preserve ordering.  The author calls this pseudotime.

Using his ordered pseudo-time operations, his read and write operations, possibilities, and tokens, he then constructs his distributed data system using these primitives.  There is detail about how it was implemented, challenges in doing so and the benefits of immutable versions.

He admits there are serious issues with the implementation of this system: “For most practical systems, our implementation so far suffers from a serious problem.  Since all versions of an object are stored forever, the total storage used by the system will increase at a rate proportional to the update traffic in the system.  Consequently, we would like to be able to throw away old versions of the objects in the system.  We can do this pruning of versions without much additional mechanism, however.”  His discussion of why this may not be desirable is interesting as it discusses the tension between reclaiming things and slow transactions. He does describe a mechanism for pruning.

After this, the author turns his attention to using atomic transactions to construct new atomic transactions: nested transactions (also “composable transactions” so that we have multiple terms for the same concept!)  Again, this is proposed, but not explored fully.

The breadth and scope of this paper is certainly impressive, as the author has described a mechanism by which distributed transactions can be implemented.  I would note there are no evaluations of the system they constructed, so we don’t know how efficient this was, but the model is an important contribution to distributed storage.

Time, Clocks, and the Ordering of Events in a Distributed System

Time, Clocks, and the Ordering of Events in a Distributed System
Leslie Lamport, Communications of the ACM, July 1978, pp. 558-565. 

I had not originally intended to cover this paper, but as I started trying to describe Implementing Atomic Actions on Decentralized Data, I realized that I needed to include it in order to better explain that work.  This is one of the papers for which Leslie Lamport was awarded the Turing Award.  In it, he is wrestling with the effects of relativity in computer systems.

What does this mean?  In essence, as soon as we have two or more distinct computer systems communicating with one another over a network, we must account for the fact that the order of events may now vary for each individual node.   Indeed, as it turns out, this effect is not restricted to distributed systems.  It can be seen in single computer systems with multiple processors as well (another area where Lamport was significantly involved), in the study of consistency models.

Networks tend to exacerbate these issues because the latencies involved make the problems more apparent. This will profoundly impact distributed systems (including distributed file systems, a notable and classic example of distributed systems) and we continue to wrestle with its manifestations 40 years after this paper.

Figure 1

The paper describes a system (Figure 1) in which three different components, labeled Process P, Process Q, and Process R, are sending messages between them.  Individual messages are labeled and numbered in sequence, and both the time at which each message was sent and the time at which it was received are shown.

This demonstrates that the observed order of events does not necessarily match what an external observer might see as the ordering.  Thus, for example, Process R receives messages in a different order than they were sent by Process Q.  This behavior might seem surprising initially, but it is in fact a common occurrence.  It could be due to queuing, intermediate network components, scheduling delays, etc.

Thus, he asks the question: what is the correct ordering of events in this system?  How do we capture this and ensure that the system behaves as we expect?

Figure 2

He introduces a nomenclature for describing the “happened before” relationship, establishing an ordering of events within the system.  He uses the → character to indicate this relationship.  Thus, if a comes before b he writes “a → b“.  Similarly, it is a transitive relation: if a → b and b → c then a → c.  Finally, he defines two events a and b as concurrent if there is no ordering between them.   This applies to operations within a single process as well as between processes.  He notes in this discussion that sending a message must happen before receiving that message.

While this might sound simple, it helps us begin to reason about these events, since we can pay attention to the events that do have ordering constraints and ignore those that do not.  In many cases, we won’t care about the order of two operations because the net effect is the same, regardless of the order.  What we do want to do, however, is ensure that we understand ordering for those operations where it does matter.

Figure 2 then takes the same chart, but this time he adds the concept of a clock tick.  The dashed lines represent the ticks of a clock; the ticks occur between events.  He then defines how the time ordering (“clock function”) relates to the → relationship: if a → b then C(a) < C(b). That is, the clock times are also well ordered.  He then observes that we can extend this definition to cover both the ordering of operations within a single process and the ordering across processes: recall that a → b holds for messages sent between processes, where a is the send event and b is the receive event, and thus C(a) < C(b) in this case as well.  He calls this the “Clock Condition”.

Figure 3

He then goes one step further: he flattens out the clock ticks, yielding Figure 3.  Now our clock ticks are uniform time and our events fall within one of the clock tick intervals.  Since dependent operations remain separated across ticks, we now have a model we can use to order events globally.

This was the point he was trying to make.  “Being able to totally order the events can be very useful in implementing a distributed system.”  This is a classic understatement and is the focus of much of distributed systems research and implementation to this day: how do you ensure that you have a definitive order of events?  We will see this when looking at later work.

At this point he focuses on more rigorously defining his system.  This allows him to note that this becomes a “distributed algorithm”.  He describes the replicated state machine that is being executed by all the nodes within the distributed system.  There is no central authority in this model forcing the ordering.

This is an important point not to miss: in Lamport’s distributed system there is no requirement for a centralized service.  He does not insist on a total ordering of events in the system and settles for a partial ordering – one in which he doesn’t worry about the events that aren’t sensitive to their ordering.  This is a powerful technique because it gives up total determinism; we gain performance through parallelism.

From this article we get the Lamport Clock which is not so much a clock as a monotonically increasing value that represents relative ordering.  Note that a hardware clock will satisfy this invariant requirement as well.
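
As a minimal sketch of such a clock (my own code, not anything from the paper): increment the counter on every local event, stamp outgoing messages, and on receive jump past the sender’s timestamp so the Clock Condition holds.

```python
# Minimal sketch of a Lamport clock (my own code, not from the paper):
# a counter that preserves happened-before, so if a -> b then C(a) < C(b).

class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1
        return self.time

    def send(self):
        # Sending is an event; its timestamp travels with the message.
        return self.local_event()

    def receive(self, msg_time):
        # Jump past the sender's timestamp so C(send) < C(receive).
        self.time = max(self.time, msg_time) + 1
        return self.time


p, q = LamportClock(), LamportClock()
t_send = p.send()           # event a on process P
t_recv = q.receive(t_send)  # event b on process Q
assert t_send < t_recv      # the Clock Condition holds
print(t_send, t_recv)       # 1 2
```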

A Client-Based Transaction System to Maintain Data Integrity

A Client-Based Transaction System to Maintain Data Integrity
William H. Paxton, in Proceedings of the seventh ACM Symposium on Operating Systems Principles, 1979, pp 18-23.

Last time I discussed the basis of consistency and locks. I did so because I thought it would help explain this paper more easily. We now move into the nascent world of sharing data over the network (what today we might call distributed systems).  One of the challenges of this world is coordinating changes to information across multiple actors (the client and the server) when that information has some sort of consistency requirement.  The authors here use the term integrity (rather than consistency) but I would suggest the terms are effectively the same for our purposes:

Integrity is a property of a total collection of data.  It cannot be maintained simply by using reliable primitives for reading and writing single units — the relations between the units are important also.

This latter point is important.  Even given atomic primitives, any time we need to update related state we need some mechanism beyond simple reads and writes.

The definitions offered by the author are highly consistent with what we saw previously:

  1. The consistency property: although many clients may be performing transactions simultaneously, each will get a consistent view of the shared data as if the transactions were being executed one at a time.
  2. The atomic property: for each transaction, either all the writes will be executed or none of them will, independent of crashes in servers or clients.

Prior work had focused on having the server handle these issues.  This paper describes how the client can accomplish this task.

There were some surprising gems here.  For example:

If a locked file is not accessed for a period of time, the server automatically releases the lock so that a crashed client will not leave files permanently unavailable.

We will see this technique used a number of times in the future; it is a standard technique for handling client-side locks in network file systems.  Thus, one novelty of the locks presented here is that they have a bounded lifetime.  One of the services required from the server, then, is support for this style of locking.
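
As a rough sketch of that bounded-lifetime idea (my own construction; the paper only states the behavior, and the timeout value and data structure here are assumptions): the server remembers when each lock was last used and quietly reclaims it after a period of inactivity.

```python
import time

# Sketch of server-side locks with a bounded lifetime: if the owning
# client stops touching a locked file, the lock quietly expires.  The
# timeout value and the data structure are my own assumptions.

LOCK_TIMEOUT = 30.0  # seconds of inactivity before the server reclaims a lock

class LockTable:
    def __init__(self):
        self.locks = {}  # filename -> (client, time of last access)

    def acquire(self, filename, client):
        owner = self.locks.get(filename)
        if owner and time.time() - owner[1] < LOCK_TIMEOUT:
            return owner[0] == client        # still held; only the owner succeeds
        self.locks[filename] = (client, time.time())
        return True

    def touch(self, filename, client):
        # Each access by the owner refreshes the lock's lifetime.
        owner = self.locks.get(filename)
        if owner and owner[0] == client:
            self.locks[filename] = (client, time.time())
            return True
        return False


table = LockTable()
print(table.acquire("/data/accounts", "client-1"))  # True: lock granted
print(table.acquire("/data/accounts", "client-2"))  # False: still held
# If client-1 crashes and never touches the file again, the lock expires
# after LOCK_TIMEOUT and client-2's next acquire will succeed.
```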

The author then goes on to propose the use of an intention log (“intention file” in the paper).  The idea is that the client computer locks files, computes the set of changes, records them in the intention log, and then commits the changes.  Then the actual changes are applied.

To achieve this, it sets out six operations (a rough sketch of the resulting flow follows the list):

  • Begin Transaction – this is what reserves the intention file.  Note that the author’s model permits only a single transaction at a time, and that the other operations fail if a transaction has not been started.
  • Open – this is what opens the actual data file.  At this point the file header is read.  If that header indicates the file is being used for a transaction, the file is recovered before the open can proceed.
  • Read – as it suggests, this reads data from the file.
  • Write – this prepares to write data to the file.  Since this is a mutation, the new data must be recorded in the intention file at this point.  The actual file is not modified at this point.
  • Abort Transaction – this causes all files to be unlocked and closed.  No changes are applied.  The transaction becomes “completed”.
  • End Transaction – this is how a transaction is recorded and applied.  The paper describes how this is handled in some detail.  In all successful cases, the changes are applied to the server.
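
Pulling those operations together, here is the rough client-side flow as I read it (all names and data structures are mine, not Paxton’s): writes land in the intention log first and touch the real files only at End Transaction.

```python
# Rough sketch of the client-side flow: writes are recorded in an
# intention log and applied to the real files only at End Transaction.
# This is my reading of the protocol, not code from the paper.

class ClientTransaction:
    def __init__(self):
        self.active = False
        self.intentions = []  # (filename, new contents) recorded before applying
        self.files = {}       # stand-in for the server's actual files

    def begin(self):
        self.active = True        # "reserves" the intention file
        self.intentions.clear()

    def read(self, filename):
        assert self.active, "operations fail outside a transaction"
        return self.files.get(filename, b"")

    def write(self, filename, data):
        assert self.active
        self.intentions.append((filename, data))  # real file untouched for now

    def abort(self):
        self.intentions.clear()   # unlock and close; apply nothing
        self.active = False

    def end(self):
        # Once the intention log is safely recorded, apply the changes;
        # a crash after that point replays the log during recovery.
        for filename, data in self.intentions:
            self.files[filename] = data
        self.intentions.clear()
        self.active = False


txn = ClientTransaction()
txn.begin()
txn.write("accounts.dat", b"balance=75")
print(txn.files.get("accounts.dat"))  # None -- nothing applied yet
txn.end()
print(txn.files["accounts.dat"])      # b'balance=75'
```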

The balance of the paper then describes how crash recovery works.  The simplifying assumptions help considerably here – this system only permits a single transaction to proceed at a time.  Then the author moves on to a “sketch of correctness proof”.  I admit I found that humorous, as an attempt at formalism without actually invoking formalism achieves little more than warm-fuzzy feelings.

He wraps up by discussing some complex cases, such as file creation and the challenge of “cleaning up” partially created file state. Having dealt with this issue over the years, I can assure you that it can be complex to get it implemented correctly. He also discusses that some files might have multi-block meta-data headers and explains how those should be handled as well.  He concludes with a discussion about handling media failure conditions, which are a class of failures that this system does not claim to protect against.

Granularity of Locks in a Shared Data Base

Granularity of locks in a shared data base,
Jim N. Gray, Raymond A. Lorie, Gianfranco R. Putzolu, in Proceedings of the 1st International Conference on Very Large Data Bases, pp. 428-451. ACM, 1975.

I was originally going to do a write-up of A Client-Based Transaction System to Maintain Data Integrity but realized that I should help motivate the use of locks and transactions better before diving into this topic area.  So to do this, I’ve picked to talk about locks.  This is a critical concept that we utilize in file systems development on a routine basis.  Note that this work actually started out in the database community, not the file systems community.

I will note that this is a long paper (23 pages) and there is considerable detail that I will omit for the sake of brevity – but I do want to touch on the key points that are raised in this paper.

They start off by defining consistency and transaction.  They define consistency first:

We assume the data base consists of a collection of records and constraints defined on these records.  There are physical constraints (ex: in a list of records, if a record A points to record B then record B must exist) as well as logical constraints (ex: conservation of money in a bank checking account application). When all such constraints are satisfied the data base is said to be consistent.

Then they move on to transaction:

A transaction is a series of accesses (for read or write operations) to the data base which, applied to a consistent data base, will produce a consistent data base.  During the execution of a transaction, the data base may be temporarily inconsistent.  The programs used to perform the transactions assume that they “see” a consistent data base.  So if several transactions are run concurrently, a locking mechanism must be used to insure that one transaction does not see temporarily inconsistent data caused by another transaction.  Also, even if there are no consistency constraints, locks must be used so that the updates of one transaction are not made available to others before the transaction completes.  Otherwise, transaction backup might cascade to other transactions which read or updated the “back up” updates.

This text does not lay out the modern description of transactions that we use (atomic, consistent, isolated, and durable, or ACID) but what it describes is a system in which these properties do in fact exist.  This concept is not unique to databases – we see it in any multi-actor system sharing resources.  The usual analogy that I use when describing this is traffic-intersection based, as the intersection is a shared resource and the individual vehicles separate actors.

The optimal configuration for resource sharing is where the need to share is minimized.  If there is no shared data, there is no need for locks. Similarly, if the data is immutable (a concept I know we’ll return to over time) we don’t need to protect it.  Only mutable state needs to use locks.  As it turns out, frequently mutable state doesn’t actually change but because it can change, if we rely upon it, we must protect against change.

This concept of transitioning from one consistent state to another consistent state safely is why we use locks.  The primary contribution of the authors in this paper is not the creation of locks and consistency but rather their creation of a clear language that we still use to describe these concepts.

In Section 2 they then describe what a “lockable object” is.  There are several important observations here, which I summarize:

  • A transaction is sequentially consistent.  This is a very strong consistency guarantee (as we will see in later papers when things get far more general.)
  • The basic lock type is reader-writer.  A reader lock provides shared access to mutable state, along with the guarantee that the mutable state won’t change while the lock is held.  A writer lock provides exclusive access to mutable state, along with the guarantee that the only changes to that state will be made by the owner of the writer lock.
  • Locks may be used to protect non-existent resources, for example, to block the creation of a new object.  Since the object doesn’t exist, you can’t lock it yet (interestingly, I usually tend to think of that as locking the structure in which the new object will exist, but their observation that you can lock non-existent items is certainly valid).
  • Locks are requested dynamically.
  • There is a dynamic balance between lock granularity and overhead.  If we lock many small objects, there is a cost associated with each of those lock operations.  In the years since this paper we have made uncontended lock acquisition rather inexpensive, but contended lock acquisition remains an expensive proposition.

The paper then goes on to discuss lock hierarchies.   This is because in any dynamic lock system in which you obtain more than a single lock, you need to ensure that there will never be a situation in which an actor blocks waiting for a lock that will never be released.  The simplest case of this is when an actor blocks waiting for a lock which it owns.  This is clearly a programming error, but it is one that I have seen numerous times over the years.  The more complex case of this is when we have a cycle in lock acquisition.  The paper points out that to prevent this we need to construct a directed acyclic graph showing the order in which locks are acquired.  Usually these are trees, but I have seen environments in which they really are DAGs, due to re-entrant behavior.

The paper describes their model for one type of lock, which is focused on locking within a hierarchical name space.  Thus, they have exclusive (X) access, shared (S) access, and intention (I) locks.  The intention lock is then used to protect each object along the lock hierarchy, with X or S access to the final node in the hierarchy.  The paper goes into greater detail about this; I’ll omit it because it is not really germane to what I found most interesting about this paper.
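
As a rough sketch of how such hierarchical locking behaves (simplified to a single, conservative I mode standing in for the paper’s finer-grained intention modes; the names and layout are mine): to lock a leaf, you first take intention locks on every ancestor.

```python
# Sketch of hierarchical locking with intention (I), shared (S), and
# exclusive (X) modes: every ancestor gets an I lock, the target gets
# S or X.  The single, conservative I mode here stands in for the
# paper's finer-grained intention modes; names and layout are mine.

COMPATIBLE = {
    ("I", "I"): True,
    ("I", "S"): False, ("S", "I"): False,
    ("I", "X"): False, ("X", "I"): False,
    ("S", "S"): True,
    ("S", "X"): False, ("X", "S"): False,
    ("X", "X"): False,
}

class LockManager:
    def __init__(self):
        self.held = {}  # node path -> list of modes currently granted

    def _grant(self, node, mode):
        if any(not COMPATIBLE[(existing, mode)]
               for existing in self.held.get(node, [])):
            return False
        self.held.setdefault(node, []).append(mode)
        return True

    def lock(self, path, mode):
        """path is a list like ["db", "accounts", "rec42"]: take I locks
        on every ancestor, then the requested mode on the final node.
        (A failed request leaves its ancestor I locks in place in this
        toy version; a real lock manager would roll them back.)"""
        for depth in range(1, len(path)):
            if not self._grant("/".join(path[:depth]), "I"):
                return False
        return self._grant("/".join(path), mode)


mgr = LockManager()
print(mgr.lock(["db", "accounts", "rec42"], "X"))  # True
print(mgr.lock(["db", "accounts", "rec99"], "S"))  # True: I locks coexist
print(mgr.lock(["db", "accounts"], "X"))           # False: records below are locked
```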

The final point that I’ll draw out of this paper is that they discuss the ideas of lock scheduling, deadlock, and lock thrashing.  The lock hierarchy is intended to prevent deadlock.  Lock scheduling can be used to detect deadlocks.  Thrashing relates to memory contention and calls out to other operating system mechanisms for resolution.

With this behind us, we should have a better basis for understanding how we use transactions and locking to ensure consistency in file systems.

The DEMOS File System

The DEMOS File System
Michael L. Powell, In Proceedings of the sixth ACM symposium on Operating systems principles, pp. 33-42.

This paper delves into the nitty gritty details of constructing physical file systems.  I was surprised that it had relatively few citations (61 according to Google Scholar when I checked) because, having read it, I would hand this paper to someone asking me “what are file systems?”  I suspect that the more frequently cited paper in this area will be “A Fast File System for UNIX,” which cites this paper.

The target for DEMOS is the CRAY-1 supercomputer, at the time the fastest computer in the world.  As a matter of comparison, modern mobile devices have more computational power (and often more I/O bandwidth) than the CRAY-1 did.

DEMOS Figures 1 and 2

The author discusses the design of a new file system for use with a custom operating system for the Los Alamos National Laboratory’s CRAY-1 computer system.  A key goal of this project was to improve performance as much as possible.  After all, why build a supercomputer if you then cripple it with features that don’t enhance its performance?

What I find delightful about this paper is that it describes the basic constituent parts of a file system, as well as strategies for optimizing performance.  It does so in a clear and understandable fashion.

DEMOS Figures 3 and 4

DEMOS utilizes a UNIX-like hierarchical file system model.  It has directories and files. It does not have the link model from Multics so paths to files are unique in DEMOS.  Files are managed in units of blocks (4096 bytes) but I/O is specified as bytes (interestingly, they specify eight bit bytes as nine bit machines were still in use.)

The authors discuss file sizes.  To the best of my knowledge this is one of the earliest papers covering this common subject (which is revisited periodically because workloads change and file sizes also change).  One of the common themes I have seen in other work is mirrored here: most files are small.  Figure 1 shows a CDF for file sizes.  We note that the majority of files in their system are small, with approximately 75% being less than 1KB; this is consistent with later work as well.  Their second figure (Figure 2) describes the proportion of transfer sizes and their source.  We see a spike in the hundreds of bytes, with perhaps 256 or 512 being “natural block sizes” that applications would use.

DEMOS Figure 5

They establish lofty performance requirements: “[T]he file system will have to support a bandwidth of 20-60 megabits/second or higher”. Our performance requirements today would be much higher, but this recognizes the reality that then (as now) the I/O bandwidth of storage is often the rate limiting factor.

DEMOS is paired with a centralized storage facility (“Common File System” or CFS) that is to provide the function of what we would now think of as a centralized file server.  While not yet implemented by the time of the paper, their plan was to introduce automatic file migration and staging.

The central bit of the paper then describes the constituent parts of the file system.  This maps rather well onto what I have seen in the typical file system: a “request interpreter” that handles requests from applications.  Even their description is appropriate: “parameter validation and request translation”; a “buffer manager” that handles the allocation of buffer cache space (often virtual cache these days); and a “disk driver” that handles low level data operations, such as filling or storing the contents of buffers.

Figures 3 and 4 capture their insight into the disk manager.  This dovetails with their discussion about the efficiency of I/O, including observations about queue management (“shortest seek time first” order for requests, and then sub-sorted by “shortest latency time first”).  This is a clear “hat tip” to the impact that rotational latency and track seek time have on performance.
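
As a toy illustration of that queue ordering (my own model, not tied to the actual CRAY-1 disk geometry): sort pending requests by seek distance from the current head position, breaking ties by rotational latency.

```python
# Toy illustration of the queue ordering described above: "shortest
# seek time first", sub-sorted by "shortest latency time first".  The
# disk model (track number plus rotational angle) is my own.

def order_requests(requests, head_track, head_angle):
    """requests: list of (track, angle) pairs for pending I/O.  Sort by
    seek distance from the current track, then by how soon the sector
    will rotate under the head."""
    def key(req):
        track, angle = req
        seek = abs(track - head_track)
        latency = (angle - head_angle) % 360  # degrees until sector arrives
        return (seek, latency)
    return sorted(requests, key=key)


pending = [(70, 10), (12, 300), (14, 45), (14, 20)]
print(order_requests(pending, head_track=15, head_angle=30))
# -> [(14, 45), (14, 20), (12, 300), (70, 10)]
```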

Speaking of performance, the authors discuss this as well.  It leads to their observations on improving I/O performance: “I/O operations ought to proceed in parallel with computation”.  Their point is that serializing these things decreases overall performance.  Their second observation: “[T]he length of time an I/O operation takes should be reduced as much as possible.”  This seems logical and is one reason why they use their optimized queueing strategy.

There is a section on “file system buffering” that touches on the tradeoffs between using memory for buffer caching versus other possible uses.  Thus, the authors evaluate how increased buffering impacts their CPU utilization – this is in keeping with their goal of parallelizing I/O and computation.  Their observation?  The greatest benefit comes from a small number of buffers; in their analysis, eight buffers provide most of the benefit. Building on that (Figures 6 and 7), they observe there is a clear limit to the benefit of further buffering.  These days we do not think too much about this because we tend to use virtual caches, so the amount of physical memory is really managed by the virtual memory management code, yet the observation would likely still apply.  There is a limit to the benefit of buffering.

The authors also point out that disk allocation is challenging.  They employ allocation bit maps, cluster allocations, over-allocation, and even simplistic predictive read-ahead.  They refer to these as “strategy” routines.

In general, this is a great introduction to the basic structure of a media file system.  There are plenty of details that will be refined in later work.

The Cap File System

The Cap Filing System
R. M. Needham and A.D. Birrell, Symposium on Operating Systems Principles, 1977, Association for Computing Machinery.

I’ve fallen behind these past ten days, working towards a deadline.  My own conference paper is now submitted and I’m working on recovering.  What that means is this week is going to be a busy one as I work on catching up.  Adding to the fun, FAST is going on this week (February 13-16, 2018).  I hope to add some bonus discussions on the current work being presented there.

Let’s get to discussing this paper.

CAP is a capabilities-based system.  Capability systems are a parallel idea that has been explored from time to time as an alternative to the access control model of Multics/UNIX-inspired operating systems.  The latest contender in this space would be Fuchsia, an experimental operating system that Google is developing (though it is an open source project under a mixture of licenses).  Interestingly, there are reports it runs on the Pixelbook in addition to the hardware it had previously supported.

At any rate, the idea behind a capability is to use identifiers that encapsulate the access rights within the capability itself.  Thus, a capability is a name, and the virtue of having the name is that it means you have the access inherent in that name.

These capabilities all represent to a program the right to have or do something: the preservation of information from one run of a program to another (a universal operating system requirement) is thus seen by the CAP programmer as the preservation of a capability rather than of an object itself.

The paper describes how capabilities are managed, which interacts with the “filing system” of course, since it has to store persistent information.  They describe the “system internal name” (SIN), which is combined with a disk address to map memory segments to actual storage.  The capability (in a “directory”) then becomes a repository of these persistent objects.  This also creates a reference to the disk block that is in use, ensuring those storage regions are not reused.

One interesting characteristic of their system is that they do not guarantee free disk space will be recycled quickly (“[T]here is no guarantee that inaccessible disk space will be relinquished at the earliest possible moment.”)  Indeed they note that space reclamation is only done when the system reboots and “[T]he filing system is not designed to run forever.”

They discuss how CAP differs from prior work (e.g., OS/360) where only the name matters; there is no concept of a directory for use as part of the capability system.  The directory actually provides them an additional level of control as well, and they use directory capabilities as well as segment capabilities.  Directories may be persistent (so they have a SIN and disk block location) or ephemeral (so they disappear when the program exits) – a sort of “built in” temporary storage concept for ephemeral memory management.

Sharing objects is now trivial – you simply tell someone the correct name; that embeds the capability within it.  They do not describe the mechanisms used for transfer (“[B]y mechanisms which do not concern us in detail here.”)

They also describe how this name space is similar to other systems (UNIX and CAL-TSS) but different:

  • Access is associated with the name which in turn references information in the directory.  It is not an attribute of the file itself.
  • The name space need not be a “strict hierarchy”.  This means that portions could become disconnected, or even be private to a single application.
  • Their use of directories behaves similarly to the model of “current directory” (presumably in UNIX) even though CAP expressly does not have a concept of a current directory.
  • Directories are not even directed acyclic graphs!

The paper describes how capabilities work, since they are a fine-grained control mechanism.  They explain that the holder of an existing capability (a program) may generate a more restrictive capability to provide to another program.  Since capabilities apply to individual files as well as directories, it is possible for a program to transfer a set of capabilities by creating a new directory, storing the relevant capabilities in it, and passing that directory to the target program.
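
As a rough sketch of that idea (the types and fields are entirely mine, not CAP’s actual representation): a capability bundles a reference with a set of rights, and a holder can mint a strictly weaker one to hand out.

```python
# Sketch of the capability idea: the "name" carries its rights, and a
# holder can derive a strictly weaker capability to hand to another
# program.  The types and fields are my own, not CAP's representation.

from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    target: str                       # what the capability refers to
    rights: frozenset = frozenset()   # e.g. {"read", "write", "enter"}

    def restrict(self, *keep):
        # Rights can be dropped when deriving a new capability, never added.
        return Capability(self.target, self.rights & frozenset(keep))

    def allows(self, right):
        return right in self.rights


# A "directory" is just a mapping from names to capabilities; holding a
# capability for the directory is what lets a program look entries up.
passwd_cap = Capability("passwd-segment", frozenset({"read", "write"}))
read_only = passwd_cap.restrict("read")      # hand out read-only access
print(read_only.allows("write"))             # False
handoff_directory = {"passwd": read_only}    # bundle capabilities to transfer
```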

The names themselves can be quite ugly, as they incorporate capabilities within them.  They describe some conventions they employ in terms of name management and directory placement, but point out these are conventions and not a hard requirement of the system.

CAP certainly presents a rather different model of a file system than we see in other systems.  The idea of disconnected name spaces that are only visible to particular programs is an intriguing one.  They use the example of the password database, which requires the program have the password file capability.

They discuss security.  The directory manager is a separate module that programs use to interact with the directories to which they have access.  To invoke the directory manager, the caller must have the ENTER capability.

I find this to be exactly the type of thought provoking paper I had hoped to find as I comb through these old papers.  The idea that a file system name space need not be connected, that it could be private to a particular program or set of programs, and embedding access rights (“capabilities”) into the name will give me plenty to think about.

If you would like to know more about CAP there is a paper about it in the prior Symposium on Operating Systems Principles: The Cambridge CAP Computer and its Operating System.  It is not too surprising that this is available from Microsoft Research, as they also built a capability based operating system (or two): Singularity and Midori.

A Principle for Resilient Sharing of Distributed Resources

A Principle for Resilient Sharing of Distributed Resources
Peter A. Alsberg and John D. Day, In Proceedings of the 2nd international conference on Software engineering, pp. 562-570. IEEE Computer Society Press, 1976.

Today I turn my attention to a paper that begins to explore the issues surrounding distributed systems.  This paper sets forth some basic principles that should be considered as part of distributed systems that are worth capturing and pointing out.

They state that a “resilient server” must have four attributes:

  1. It must be able to detect and recover from some finite number of errors.
  2. It must be reliable enough that a user doesn’t need to worry about the possibility of service failure.
  3. Failures just beyond the finite limit are not catastrophic.
  4. One user’s abuse should not have material impact on any other user.

These all seem somewhat reasonable, though I must admit that I found (3) a bit surprising, as it is not uncommon for some classes of failures to cause catastrophic failure.  For example, when using erasure coding it is common for some number of failures to lead to catastrophic failure.  Upon reflection, I realized that one could achieve this goal by simply setting the finite limit a bit lower, though I suspect this is not entirely what the original author had in mind.

Figure 3

Still, the general concept of resiliency is a good one to capture and consider.  The authors point out some of the challenges inherent in this over a network, notably latency.  “[A] major consideration when choosing a resource sharing strategy is to reduce, as much as possible, the number of message delays required to effect the sharing of resources.”

In other words, keep the messages to a minimum, try not to make them synchronous.  Asynchronous messaging systems will turn out to be rather complex, however, and sometimes there are just operations that require synchronous behavior.

Of course, there has to be a file system tie-in here (however tenuous) and there is!  Under examples they list “Network Virtual File Systems which provide directory services and data access services…”  Thus, it is clear that the authors are considering network file systems as part of their work.

In 1976 the authors indicate that the cost to send a message is on the order of 100ms, while the cost to acquire a lock on the CPU is 100 microseconds to 1 millisecond.  While things are faster now, there is still a large difference between these two paths on modern systems.  Thus, we will still be dealing with issues on the same scale.

The bulk of the paper then walks through the description of their model for providing resilient network services: an application host sends a request to a primary server host; that server host then communicates with a secondary server host.  That secondary server host can send backup requests to a tertiary server, and so forth.  The secondary host confirms with the primary host, and ultimately it is the secondary host that confirms the operation with the application host.

They cover a variety of important concepts, such as the idea that a host may need to record state about the operations, that operations cannot be applied until the other servers have received their messages, etc.  This is, in essence, an early consensus protocol. While not efficient, the authors have laid groundwork in helping us think about how we construct such services.
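
Here is a toy trace of that message flow as I read it (a simplification of my own, not the paper’s protocol): servers are chained primary → secondary → backups, and it is the secondary that answers the application.

```python
# Toy trace of the resilient-server message flow as I read it: the
# application talks to the primary, requests flow down the chain of
# backups, and the secondary is the one that answers the application.
# This is my simplification, not the paper's protocol.

class Server:
    def __init__(self, name, backup=None):
        self.name, self.backup, self.log = name, backup, []

    def handle(self, request, trace):
        self.log.append(request)              # record state for recovery
        trace.append(f"{self.name} received {request!r}")
        if self.backup:                       # forward down the chain
            self.backup.handle(request, trace)
            trace.append(f"{self.backup.name} acks {self.name}")


def client_request(primary, request):
    trace = [f"application sends {request!r} to {primary.name}"]
    primary.handle(request, trace)
    # Per the description above, the secondary (not the primary)
    # confirms the operation back to the application host.
    trace.append(f"{primary.backup.name} confirms {request!r} to application")
    return trace


tertiary = Server("tertiary")
secondary = Server("secondary", backup=tertiary)
primary = Server("primary", backup=secondary)
for line in client_request(primary, "update record X"):
    print(line)
```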

I have included Figure 3 from the paper above.  It shows the message flow through the system.  The paper also discusses how to handle a number of failure situations and how messages flowing through the system keep track of which nodes are current.

It also touches on one of the most complex issues in distributed systems: network partition.  Intriguingly, the authors do not assert that one partition must remain alive as they describe the decision being related to the specific requirements of the environment.  Thus, in their model it would be possible to end up with partitions that continue forward but can no longer be easily re-joined after the network partition is resolved.  Of course, they don’t require that both sides of a partition progress, though they do not discuss ways to handle this, either.

They assert that the primary/secondary/backup model is optimal in terms of the number of messages that it sends and in how it ensures the messages are strictly ordered.  Then they briefly describe a few other possible arrangements that have multiple primaries and/or secondaries.  Their final conclusion is that their original strategy is at least as good as the alternatives though this is not demonstrated in any formal way.

Now that they have their replication system, they view how it will work for the network virtual file system.  They conclude that the highest levels of the hierarchy need to be stored on all nodes (which makes them the most expensive to maintain).  They partition the name space below that and record location information within the nodes of the name space where the storage has been split out across hosts.  Thus, we see a description of a global name space.
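
As a toy sketch of that kind of partitioned name space (my own model, not the paper’s design): the top of the hierarchy is replicated on every node, and each entry below the split point records which host owns the subtree.

```python
# Toy sketch of a partitioned global name space: the top of the tree is
# replicated on every node, and each entry below the split point records
# which host owns that subtree.  The layout and names are mine.

replicated_root = {
    "/home": {"alice": ("host-a", "/vol1/alice"),
              "bob":   ("host-b", "/vol2/bob")},
    "/tmp":  {"scratch": ("host-c", "/scratch")},
}

def resolve(path):
    """Return (host, remote_path) for a path like /home/alice/notes.txt,
    or None if the path is not in the shared name space."""
    parts = path.strip("/").split("/")
    if len(parts) < 2:
        return None
    entry = replicated_root.get("/" + parts[0], {}).get(parts[1])
    if entry is None:
        return None
    host, remote_prefix = entry
    return host, "/".join([remote_prefix] + parts[2:])


print(resolve("/home/alice/notes.txt"))  # ('host-a', '/vol1/alice/notes.txt')
```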

Their system is simple, but they identify a number of the important issues.  Their suggestion about sharding the file system’s name space is one that we shall see again in the future.

They have laid the groundwork for thinking about how to build distributed file systems.

Some Observations about Decentralization of File Systems

Some Observations about Decentralization of File Systems.
Jerome H. Saltzer, 1971.

This paper caught my eye because it leads in a different direction than the other file systems papers I’ve been looking at.  Instead of talking about file systems on a single computer, it has the audacity to suggest that maybe we want to have remotely accessible storage.

The author frames this in the context of networks.  The first of the two references in this paper is about networks and helps frame the conversation about “decentralized” file systems:

Computer network development to achieve resource sharing
Lawrence G. Roberts and Barry D. Wessler, AFIPS ’70 (Spring) Proceedings of the May 5-7, 1970, spring joint computer conference, pp 543-549.

The authors’ affiliation is the Advanced Research Projects Agency (ARPA) and what they describe in this paper is the ARPA Network. I don’t want to get too far into this, but I did want to include this wonderful bandwidth/cost chart – something I have definitely seen in various guises since then.

ARPA Network 1970 Performance Cost Comparison

In this time frame, they are discussing the creation of a network for sharing data across geographically dispersed sites, including sites scattered around the United States.  Connection speeds in this time frame are 50Kb/s.  It estimates the entire bandwidth consumption of the network will be between 200-800Kb/s by mid-1971, with twenty nodes connected to the entire network.

The point of this diagram is to discuss costs, where it points out that the least expensive way to move large quantities of data is to encode it on a magnetic tape and send it in the mail (air mail, of course).

Why is this important?  Because it helps set the stage for the conversation about resource sharing.  This is before Ethernet exists (Version 1.0 of the Ethernet specification appears in 1980).  Thus, the networks that do exist are hardware specific.  The amount of data being moved is remarkably small by modern standards.

This is, however, where we start considering what is involved in supporting network file systems – decentralized systems of storage that can communicate with one another.

The authors stake out their position in the abstract:

This short note takes the position that the inherent complexity of a decentralized and a centralized information storage system are by nature essentially the same.

This defines the starting point of what will be a decades-long conversation on this fundamental issue.   The authors argue that in fact the real issue is one of engineering, not models:

The consequence of this claim, if substantiated, is that the technical choice between a centralized or decentralized organization is one of engineering tradeoffs pertaining to maintainability, economics, equipment available, and the problem being solved, rather than one of functional properties or fundamental differences in complexity.

The discussion then points out that in some cases, adding a 20-40 millisecond network delay on top of the usual 20-50 millisecond disk delay is not dramatically different.  They explore other areas where the timing might make a substantial difference.  Intriguingly, they discuss a form of disaggregation – where they envision compute being in one location, memory in another, storage in yet another.  They point out that this turns back into a computer (albeit one with long latency to access memory, for example).

They then discuss caching (“buffer memory”) of information but point out that the challenge is now that the system has multiple copies of what need to be the same data – it “has the problem of systematic management of multiple copies of information”.  Surprisingly, they make a leap here equating this situation between centralized and decentralized systems: this is a problem of modeling access patterns to shared information and then inventing algorithms for “having the information on the right storage device at the right time”!

With decades of hindsight, this looks surprisingly naive. Indeed, even the authors construct a clear path for retreat here: “… this is not to say that there are no differences of significance”.

They conclude by opining that storage is challenging:

The complexity is the inevitable consequence of the objectives of the information storage system: information protection, sharing, privacy, accessibility, reliability, capacity, and so on.  What is needed in both cases is a methodical engineering basis for designing an information storage system to specification.  On the other hand a decentralized organization offers the potential for both more administrative flexibility, and also more administrative chaos.

What surprised me about this paper is that the issues around sharing data between computers were already being considered at this point.  The idea that network file systems and local file systems really should be “more or less” the same is thus planted early. I’m sure we will see divergence on this point as we continue moving forward.

For now, we have a base upon which to build this new direction of “decentralized” file systems.

An Experimental Implementation of the Kernel/Domain Architecture

An Experimental Implementation of the Kernel/Domain Architecture
Michael J. Spier, Thomas N. Hastings, David N. Cutler,  ACM SIGOPS Operating Systems Review, vol. 7, no. 4, pp. 8-21. ACM, 1973.

While looking for file systems papers, one of the approaches that I used was to walk through the successive years of conference proceedings from the Symposium on Operating Systems Principles (SOSP).  This biennial conference has been going on for more than 50 years at this point and contains a wealth of interesting and influential papers from the operating systems domain.  It is common to find file systems papers at these conferences.

This particular gem comes from the fourth such symposium (1973).  It discusses various aspects of operating systems, which makes it quite interesting on its own, but it also discusses file systems within the context of an experimental operating system, which may prove insightful.

This paragraph caught my eye:

On our proposed architecture with its many monitor-like domains we envisioned the possibility of supporting the concurrent existence of similar purpose redundant supervisory services (e.g., 3 file systems, 7 file nomenclature hierarchies, 4 login responders, etc.) which, for all practical intents and purposes, would provide the external appearance of concurrent operating systems of disparate natures.

This is the first time I have seen a reference to the concept of having multiple file systems simultaneously as an explicit part of the operating system model.  It also describes this intriguing concept of “concurrent operating systems of disparate natures”.  One of the authors (David N. Cutler) goes on to work on the construction of several different operating systems, including what is now Windows – and its architecture is one that was designed to provide precisely this type of concurrent, disparate operating system environment.  Windows also supports multiple dynamically loaded file systems, which now does not look so novel, as essentially all modern operating systems do.

Indeed, what they go on to describe is the “domain machine” in which the monolithic operating system has been divided up into distinct components, which constitute distinct supervisory domains.  One of these domains represents the small set of functionality necessary to manage the actual hardware.  The goal was to keep it small and bug free.

Complicating this was the authors’ observation that the system must be re-entrant in order to avoid deadlock.  This was a deliberate goal and they even go so far as to point out this behavior when involving the file system – the kernel may need to invoke the file system to handle demand loading of a domain, but the file system must invoke the kernel in order to access the disk where the data is stored.

Another of their observations intrigued me: they separate the “traditional file system” into a storage layer (“disk space manager” as they call it) and a “name space manager”.  They admit the possibility that this name space might be hierarchical.  They also explicitly note they could support “differently flavored file systems”.

In scheduling, they touch on the fact that I/O bound processes (e.g., the file system) benefit from being given real-time scheduling priority on traditional operating systems; in this domain system they do not need to do so.

This paper definitely surprised me, as I see the germs of ideas here that will be clearly realized in later systems work.  Some of these ideas clearly survive and emerge later, including those that affect file systems.

As we will see later, this logical divide of file systems actually naturally develops along the storage line, though the name space is not so clearly delineated.  Perhaps it should be.

While not directly germane to file systems, Dave Cutler goes on to work on RSX-11M and VMS (at Digital Equipment Corporation), as well as Windows NT (at Microsoft).  The latter became the basis of the Windows OS beginning with Windows XP.