{"id":1561,"date":"2015-04-23T05:00:24","date_gmt":"2015-04-23T12:00:24","guid":{"rendered":"http:\/\/zackmdavis.net\/blog\/?p=1561"},"modified":"2015-04-21T22:14:14","modified_gmt":"2015-04-22T05:14:14","slug":"the-foundations-of-erasure-codes","status":"publish","type":"post","link":"http:\/\/zackmdavis.net\/blog\/2015\/04\/the-foundations-of-erasure-codes\/","title":{"rendered":"The Foundations of Erasure Codes"},"content":{"rendered":"<p><a href=\"https:\/\/swiftstack.com\/blog\/2015\/04\/20\/the-foundations-of-erasure-codes\/\">(cross-posted from the <em>SwiftStack Blog<\/em>)<\/a><\/p>\n<blockquote><p>\nIn enabling mechanism to combine together general symbols, in successions of unlimited variety and extent, a uniting link is established between the operations of matter and the abstract mental processes of the most abstract branch of mathematical science. A new, a vast, and a powerful language is developed for the future use of analysis, in which to wield its truths so that these may become of more speedy and accurate practical application for the purposes of mankind [<em>sic<\/em>] than the means hitherto in our possession have rendered possible.\n<\/p><\/blockquote>\n<div style=\"text-align: right\">\n\u2014<a href=\"http:\/\/www.fourmilab.ch\/babbage\/sketch.html\">Ada Lovelace<\/a> on Charles Babbage's <a href=\"http:\/\/en.wikipedia.org\/wiki\/Analytical_Engine\">Analytical Engine<\/a>, 1842\n<\/div>\n<p><\/p>\n<p>Dear reader, if you're reading [<a href=\"https:\/\/swiftstack.com\/blog\/\">the <em>SwiftStack Blog<\/em><\/a>], you may have already <a href=\"https:\/\/swiftstack.com\/blog\/2013\/07\/17\/erasure-codes-with-openstack-swift-digging-deeper\/\">heard that<\/a> <em>erasure codes<\/em> <a href=\"https:\/\/github.com\/openstack\/swift\/blob\/f6628ae2\/CHANGELOG#L3-25\">have been added to<\/a> OpenStack Swift (<a href=\"http:\/\/docs.openstack.org\/developer\/swift\/overview_erasure_code.html#beta-not-production-ready\">in beta<\/a> for the 2.3.0 
Kilo release, with continuing improvements thereafter) and that this is a really great thing that will make the world a better place.<\/p>\n<p>All of this is entirely true. But what is perhaps less widely heard is <em>exactly<\/em> what erasure codes are and <em>exactly<\/em> why their arrival in Swift is a really great thing that will make the world a better place. That is what I aim to show you in this post\u2014and I <em>do<\/em> mean show, not merely tell, for while integrating erasure codes into a production-grade storage system is (was!) an immense effort requiring months of work by some of the finest programmers the human race has to offer, the core idea is actually simple enough to fit in a (longish) blog post. Indeed, by the end of this post, we will have written a complete working implementation of a simple variant of <a href=\"http:\/\/en.wikipedia.org\/wiki\/Reed%E2%80%93Solomon_error_correction\">Reed\u2013Solomon coding<\/a>, not entirely unlike what is used in Swift itself. No prior knowledge will be assumed except a working knowledge of high-school algebra and the Python programming language.<\/p>\n<p><!--more--><\/p>\n<p>But first, we need to understand the problem that erasure codes solve. The strategy Swift has traditionally used to achieve its reliability and fault-tolerance properties is <em>replication<\/em>: keep more than one copy of each object (typically three), <a href=\"https:\/\/swiftstack.com\/blog\/2013\/02\/25\/data-placement-in-swift\/\">preferably<\/a> in entirely different datacenters, failing that, on different machines, and failing that, <em>at least<\/em> on different hard drives. 
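A back-of-the-envelope sketch shows why spreading copies across failure domains helps (the per-drive failure probability below is a made-up illustrative figure, and real drive failures are not perfectly independent):

```python
# Hypothetical probability that any one drive fails during some period.
# (Made-up number for illustration only.)
p_drive_failure = 0.02

# With no replication, losing that one drive loses the object.
p_loss_one_copy = p_drive_failure

# With three replicas on independent drives, an object is lost only if
# *all three* drives fail.
p_loss_three_replicas = p_drive_failure ** 3

# Three replicas: from a 1-in-50 chance of loss to under 1 in 100,000.
assert p_loss_three_replicas < 1e-5 < p_loss_one_copy
```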
Your data is safe from occasional drive failures because the probability of <em>all<\/em> the drives containing a particular object failing at the same time is very, very small.<\/p>\n<p>The problem with replication is that it's expensive: if you keep three replicas, then for every terabyte that you want to use, you have to pay for <em>three<\/em> terabytes of actual physical storage. The cost would <em>appear<\/em> to be unavoidable, unless ... <em>unless<\/em> there were some way to reap the benefits of distributing the information across different failure domains without storing the <em>entire<\/em> object at each location ...<\/p>\n<p>&quot;But surely this is impossible!&quot; I hear you cry. &quot;It's useless to make <em>half<\/em> a copy of something, because you can't know in advance of a disaster whether the half you made a backup of is the half that will need to be restored. In order to enjoy the safety of having a spare, you need a spare of the <em>whole thing<\/em>.&quot;<\/p>\n<p>My dear reader, this objection is compelling, well-stated\u2014and gloriously, one-hundred-percent <em>wrong<\/em>. We <em>can<\/em> achieve reliability guarantees similar to that of the replication strategy, keeping our data safe even as some of its fragments are damaged, lost, or erased. (Hence the name, <em>erasure codes<\/em>.) The method will have its own costs in the form of increased CPU load and more network requests; it won't make sense for all use cases, but when appropriate, the efficiency gain is impressive. It all depends on applying a deep philosophical insight into the nature of space itself.<\/p>\n<p>Specifically: <em>two points make a line<\/em>.<\/p>\n<p>Given any two distinct points on a plane, there is one and exactly one line that passes through both of them. We reconstruct anything we might want to know about a particular line just by remembering two points that it passes through.<\/p>\n<p>But suppose we were to remember <em>three<\/em> points. 
Then we could still reconstruct the line from any two of them, which means that the information about our line hasn't been lost even if we forget one of the points.<\/p>\n<p><a href=\"http:\/\/zackmdavis.net\/blog\/wp-content\/uploads\/2015\/04\/polynomial.png\"><img decoding=\"async\" loading=\"lazy\" src=\"http:\/\/zackmdavis.net\/blog\/wp-content\/uploads\/2015\/04\/polynomial.png\" alt=\"polynomial\" width=\"321\" height=\"255\" class=\"alignright size-full wp-image-1562\" srcset=\"http:\/\/zackmdavis.net\/blog\/wp-content\/uploads\/2015\/04\/polynomial.png 321w, http:\/\/zackmdavis.net\/blog\/wp-content\/uploads\/2015\/04\/polynomial-300x238.png 300w\" sizes=\"(max-width: 321px) 100vw, 321px\" \/><\/a><\/p>\n<p>Similarly, three points make a parabola, four points make a cubic curve, and in full generality, <em>m<\/em>+1 points make a degree-<em>m<\/em> polynomial. Given <em>n<\/em> points on a polynomial curve where <em>n<\/em> is greater than <em>m<\/em>+1, <em>any<\/em> <em>m<\/em>+1 of them suffice to reconstruct the polynomial.<\/p>\n<p>Thus, we have a clear strategy for storing data in a reliable, failure-tolerant way, without going to the expense of storing complete replicas: all we have to do is pretend our data is made out of polynomials, and store more points than are strictly necessary to reconstruct the data.<\/p>\n<p>But don't take <em>my<\/em> word for it! Mere verbal arguments can be deceptive, but <a href=\"http:\/\/en.wikipedia.org\/wiki\/Curry%E2%80%93Howard_correspondence\">code is proof<\/a> and code is truth, so if you still doubt that such an idea can really be made to work\u2014and maybe you should\u2014you won't after we're done implementing it.<\/p>\n<p>So suppose we want to save some textual data; say, the string, &quot;THE FUNDAMENTAL PROBLEM OF COMMUNICATION IS THAT OF REPRODUCING AT ONE POINT EITHER EXACTLY OR APPROXIMATELY A MESSAGE SELECTED AT ANOTHER POINT&quot;. 
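Before we chop up that message, we can sanity-check the geometric claim with a tiny throwaway sketch (the helper name here is my own, not part of the implementation to come): recover a line's coefficients from any two of three remembered points.

```python
from itertools import combinations

# A line y = a + b*x is determined by any two of its points.
def line_through(p, q):
    (x1, y1), (x2, y2) = p, q
    b = (y2 - y1) / (x2 - x1)  # slope
    a = y1 - b * x1            # intercept
    return a, b

# Remember three points on the line y = 7 + 2x ...
points = [(0, 7), (1, 9), (2, 11)]

# ... then "erase" any one of them: every remaining pair still
# reconstructs the same (intercept, slope).
for pair in combinations(points, 2):
    assert line_through(*pair) == (7.0, 2.0)
```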
Now, this data is made out of letters and spaces, not polynomials, but we can process it into a form that will make it easier to make-believe that it is. Say, we split the text into chunks of a fixed size (padding the end with extra spaces if necessary so that our chunk size evenly divides it), and convert the characters into integers from 0 to 26 (space is 0, <em>A<\/em> is 1, <em>B<\/em> is 2, <em>&amp;c.<\/em>). Here are some functions to help with that\u2014<\/p>\n<pre><code>from string import ascii_uppercase\r\n\r\nALPHABET = &quot; &quot;+ascii_uppercase\r\nCHAR_TO_INT = dict(zip(ALPHABET, range(27)))\r\nINT_TO_CHAR = dict(zip(range(27), ALPHABET))\r\n\r\ndef pad(text, chunk_size):\r\n    return text + &#039; &#039;*(chunk_size - len(text) % chunk_size)\r\n\r\ndef chunkify(text, chunk_size):\r\n    return [text[i:i+chunk_size]\r\n            for i in range(0, len(text), chunk_size)]\r\n\r\ndef convert(string):\r\n    return [CHAR_TO_INT[c] for c in string]<\/code><\/pre>\n<p>After turning our text into converted chunks (lists of integers), we can interpret each chunk as representing the coefficients of a polynomial function: say, in order of increasing degree, so that, <em>e.g.<\/em>, the list [1, 2, 3] represents the function 1 + 2<em>x<\/em> + 3<em>x<\/em><sup>2<\/sup>. 
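As a quick check of those helpers (their definitions repeated here so the snippet runs on its own):

```python
from string import ascii_uppercase

# Repeated from the definitions above so this snippet is self-contained.
ALPHABET = " " + ascii_uppercase
CHAR_TO_INT = dict(zip(ALPHABET, range(27)))

def pad(text, chunk_size):
    return text + ' '*(chunk_size - len(text) % chunk_size)

def chunkify(text, chunk_size):
    return [text[i:i+chunk_size]
            for i in range(0, len(text), chunk_size)]

def convert(string):
    return [CHAR_TO_INT[c] for c in string]

# "THE F" converts to the coefficients of 20 + 8x + 5x^2 + 0x^3 + 6x^4.
assert convert("THE F") == [20, 8, 5, 0, 6]

# Note one quirk: when the text length is already a multiple of the
# chunk size, pad as written appends a full extra chunk of spaces;
# that's harmless, because decoding will strip trailing whitespace.
assert chunkify(pad("THE F", 5), 5) == ["THE F", "     "]
```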
Then we can take points on that polynomial at <em>n<\/em> different values of the independent variable <em>x<\/em> for some <em>n<\/em> greater than the chunk size to get a properly redundant encoding.<\/p>\n<p>(It's actually better if you use polynomials over a <em>finite field<\/em> &#x1d53d;<sub><em>q<\/em><\/sub>, where <em>q<\/em> is a prime raised to the power of something; when <em>q<\/em> is itself prime, that's just the integers modulo <em>q<\/em>. But let's not worry about that.)<\/p>\n<pre><code>def evaluate_polynomial(coefficients, x):\r\n    return sum(c * x**i for i, c in enumerate(coefficients))\r\n\r\ndef encode(chunk, n):\r\n    return [evaluate_polynomial(chunk, i) for i in range(n)]\r\n\r\ndef erasure_code(text, chunk_size, encoded_chunk_size):\r\n    chunks = chunkify(pad(text, chunk_size), chunk_size)\r\n    converted_chunks = [convert(chunk) for chunk in chunks]\r\n    return [list(enumerate(encode(chunk, encoded_chunk_size)))\r\n            for chunk in converted_chunks]<\/code><\/pre>\n<p>Then, with a choice for the original chunk size (which you'll recall is also the number of terms in the polynomial used to encode each chunk) and the size of the resulting encoded chunk (that is, the number of points we'll sample from the polynomials), we can encode our text.<\/p>\n<pre><code>$ python3\r\n&gt;&gt;&gt; from reed_solomon import *\r\n&gt;&gt;&gt; text = &quot;THE FUNDAMENTAL PROBLEM OF COMMUNICATION IS THAT OF\r\nREPRODUCING AT ONE POINT EITHER EXACTLY OR APPROXIMATELY A MESSAGE\r\nSELECTED AT ANOTHER POINT&quot;\r\n&gt;&gt;&gt; encoded = erasure_code(text, 5, 8)\r\n&gt;&gt;&gt; encoded\r\n[[(0, 20), (1, 39), (2, 152), (3, 575), (4, 1668), (5, 3935), (6,\r\n8024), (7, 14727)], [(0, 21), (1, 53), (2, 281), (3, 1179), (4, 3533),\r\n(5, 8441), (6, 17313), (7, 31871)], [(0, 5), ...<\/code><\/pre>\n<p><em>[further output redacted]<\/em><\/p>\n<p>At this point, our text has been transformed into a Python list, whose elements are Python lists representing the individual 
chunks, whose elements are tuples representing (<em>x<\/em>, <em>y<\/em>) coordinate pairs representing points on the polynomial representing that chunk.<\/p>\n<p>Let's simulate distributing that encoded information across several storage nodes by writing points with different <em>x<\/em>-values to different files. We'll make-believe that each file is a different storage node. We'll write another function for that.<\/p>\n<pre><code>import json\r\n\r\ndef disperse(encoded_chunks):\r\n    node_count = len(encoded_chunks[0])\r\n    for i in range(node_count):\r\n        with open(&#039;node&#039;+str(i), &#039;w&#039;) as node:\r\n            node.write(json.dumps([chunk[i] for chunk in encoded_chunks]))<\/code><\/pre>\n<p>And try it out\u2014<\/p>\n<pre><code>&gt;&gt;&gt; disperse(encoded)\r\n&gt;&gt;&gt;\r\n$ ls\r\nnode0  node1  node2  node3  node4  node5  node6  node7\r\n$ cat node4\r\n[[4, 1668], [4, 3533], [4, 3517], [4, 1824], [4, 4080], [4, 4342],\r\n[4, 1665], [4, 4769], [4, 5460], [4, 4172], [4, 4710], [4, 2254], [\r\n4, 433], [4, 2436], [4, 4464], [4, 5796], [4, 1596], [4, 4428], [4,\r\n 1417], [4, 5313], [4, 5452], [4, 709], [4, 6212], [4, 4973], [4, 5\r\n445], [4, 5205], [4, 6308], [4, 4412], [4, 1555]]<\/code><\/pre>\n<p>In conclusion, that's how you use Reed\u2013Solomon coding to turn comprehensible English text into inscrutable lists of lists of numbers distributed across several files. Thank you, and\u2014<\/p>\n<p>What's that you say, dear reader? Demonstrating how to encode something is useless unless you also demonstrate how to decode it? Well, I suppose you may have a point. Never fear\u2014we can do that, too! 
But first, we'll need some functions for manipulating polynomials (in the &quot;list of coefficients in order of ascending power&quot; form that we've been using).<\/p>\n<pre><code>def get_coefficient(P, i):\r\n    if 0 &lt;= i &lt; len(P):\r\n        return P[i]\r\n    else:\r\n        return 0\r\n\r\ndef add_polynomials(P, Q):\r\n    n = max(len(P), len(Q))\r\n    return [get_coefficient(P, i) + get_coefficient(Q, i) for i in range(n)]\r\n\r\ndef scale_polynomial(P, a):\r\n    return [a*c for c in P]\r\n\r\ndef multiply_polynomials(P, Q):\r\n    maximum_terms = len(P) + len(Q)\r\n    R = [0 for _ in range(maximum_terms)]\r\n    for i, c in enumerate(P):\r\n        for j, d in enumerate(Q):\r\n            R[i+j] += c * d\r\n    return R<\/code><\/pre>\n<p>Once we can do arithmetic with polynomials, we can write functions to reconstruct the polynomial representing a chunk of our text given our saved points, which is probably the most intricate part of this entire endeavor. We'll use a technical trick called <em>Lagrange interpolation<\/em>, after the great mathematician-astronomer <a href=\"http:\/\/en.wikipedia.org\/wiki\/Joseph-Louis_Lagrange\">Joseph-Louis Lagrange<\/a>.<\/p>\n<p>Suppose we want to reconstruct a cubic polynomial from the four points (<em>x<sub>1<\/sub><\/em>, <em>y<sub>1<\/sub><\/em>), (<em>x<sub>2<\/sub><\/em>, <em>y<sub>2<\/sub><\/em>), (<em>x<sub>3<\/sub><\/em>, <em>y<sub>3<\/sub><\/em>), and (<em>x<sub>4<\/sub><\/em>, <em>y<sub>4<\/sub><\/em>). 
It turns out that a formula for the polynomial is<\/p>\n<p><em>y<sub>1<\/sub><\/em>\u2113<sub>1<\/sub>(<em>x<\/em>) + <em>y<sub>2<\/sub><\/em>\u2113<sub>2<\/sub>(<em>x<\/em>) + <em>y<sub>3<\/sub><\/em>\u2113<sub>3<\/sub>(<em>x<\/em>) + <em>y<sub>4<\/sub><\/em>\u2113<sub>4<\/sub>(<em>x<\/em>)<\/p>\n<p>where \u2113<sub>1<\/sub>(<em>x<\/em>) (the <em>first Lagrange basis element<\/em>) stands for<\/p>\n<p>((<em>x<\/em> \u2013 <em>x<\/em><sub>2<\/sub>)(<em>x<\/em> \u2013 <em>x<\/em><sub>3<\/sub>)(<em>x<\/em> \u2013 <em>x<\/em><sub>4<\/sub>)) \/ ((<em>x<\/em><sub>1<\/sub> \u2013 <em>x<\/em><sub>2<\/sub>)(<em>x<\/em><sub>1<\/sub> \u2013 <em>x<\/em><sub>3<\/sub>)(<em>x<\/em><sub>1<\/sub> \u2013 <em>x<\/em><sub>4<\/sub>))<\/p>\n<p>and so on\u2014for each <em>i<\/em> between 1 and the number of points we have, the numerator of the <em>i<\/em>th Lagrange basis element is the product of (<em>x<\/em> \u2013 <em>x<\/em><sub><em>j<\/em><\/sub>) for all <em>j<\/em> from 1 up to the number of points we have but not equal to <em>i<\/em>, and the denominator follows a similar pattern but with <em>x<\/em><sub><em>i<\/em><\/sub> instead of <em>x<\/em>. (Note that we're using letters with subscripts, like <em>x<\/em><sub><em>i<\/em><\/sub>, to represent specific constants, whereas <em>x<\/em> without a subscript is a function's independent variable.)<\/p>\n<p>I hear you ask, &quot;But why <em>this particular<\/em> arbitrary-looking formula out of the space of all possible arbitrary-looking formulae?&quot; But the grace and beauty of this formula is exactly that it's engineered specifically to pass through our points. Consider what happens when we choose <em>x<\/em> equal to <em>x<\/em><sub>1<\/sub>. 
The second through fourth terms <em>y<\/em><sub>2<\/sub>\u2113<sub>2<\/sub>(<em>x<\/em><sub>1<\/sub>) through <em>y<\/em><sub>4<\/sub>\u2113<sub>4<\/sub>(<em>x<\/em><sub>1<\/sub>) all contain a factor of (<em>x<\/em><sub>1<\/sub> \u2013 <em>x<\/em><sub>1<\/sub>) and are thus zero, but the first term becomes<\/p>\n<p><em>y<\/em><sub>1<\/sub> ((<em>x<\/em><sub>1<\/sub> \u2013 <em>x<\/em><sub>2<\/sub>)(<em>x<\/em><sub>1<\/sub> \u2013 <em>x<\/em><sub>3<\/sub>)(<em>x<\/em><sub>1<\/sub> \u2013 <em>x<\/em><sub>4<\/sub>)) \/ ((<em>x<\/em><sub>1<\/sub> \u2013 <em>x<\/em><sub>2<\/sub>)(<em>x<\/em><sub>1<\/sub> \u2013 <em>x<\/em><sub>3<\/sub>)(<em>x<\/em><sub>1<\/sub> \u2013 <em>x<\/em><sub>4<\/sub>))<\/p>\n<p>= <em>y<\/em><sub>1<\/sub>(1)<\/p>\n<p>= <em>y<\/em><sub>1<\/sub>.<\/p>\n<p>So by design, our interpolated polynomial takes value <em>y<\/em><sub>1<\/sub> at <em>x<\/em><sub>1<\/sub>, <em>y<\/em><sub>2<\/sub> at <em>x<\/em><sub>2<\/sub>, and so forth. In Python, the whole process looks like this\u2014<\/p>\n<pre><code>def lagrange_basis_denominator(xs, i):\r\n    denominator = 1\r\n    for j, x in enumerate(xs):\r\n        if j == i:\r\n            continue\r\n        denominator *= xs[i] - xs[j]\r\n    return denominator\r\n\r\ndef lagrange_basis_element(xs, i):\r\n    element = [1]\r\n    for j in range(len(xs)):\r\n        if j == i:\r\n            continue\r\n        element = multiply_polynomials(element, [-xs[j], 1])\r\n    scaling_factor = 1\/lagrange_basis_denominator(xs, i)\r\n    return scale_polynomial(element, scaling_factor)\r\n\r\ndef interpolate(points):\r\n    result = [0]\r\n    xs, ys = zip(*points)\r\n    for i in range(len(points)):\r\n        result = add_polynomials(\r\n            result,\r\n            scale_polynomial(lagrange_basis_element(xs, i), ys[i])\r\n        )\r\n    return [round(k) for k in result]<\/code><\/pre>\n<p>(Note that we're rounding off our computed coefficients because this implementation isn't very <em>numerically 
stable<\/em>\u2014the subtle differences between true real-number arithmetic and the approximate floating-point arithmetic implemented by computers start to accumulate, and if we choose too large of a chunk size, our program will actually start giving the wrong answers\u2014but let's not worry about that, either.)<\/p>\n<p>With this technique, we now have all the tools we need to recover our text from a subset of the data we wrote to our various &quot;nodes&quot; earlier. What we need to do is this: for each chunk, arbitrarily select a number of stored points equal to our chunk size, <em>interpolate<\/em> the polynomial from them, <em>deconvert<\/em> the numbers which are the coefficients of that polynomial back into their character equivalents, <em>unchunkify<\/em> the chunks into a unified whole, and <em>unpad<\/em> any whitespace we added to the end when we began.<\/p>\n<pre><code>def deconvert(sequence):\r\n    return &#039;&#039;.join(INT_TO_CHAR[i] for i in sequence)\r\n\r\ndef unchunkify(chunks):\r\n    return &#039;&#039;.join(chunks)\r\n\r\ndef unpad(text):\r\n    return text.rstrip()\r\n\r\ndef erasure_decode(encoded_chunks, chunk_size, encoded_chunk_size):\r\n    converted_chunks = [interpolate(chunk[:chunk_size])[:chunk_size]\r\n                        for chunk in encoded_chunks]\r\n    return unpad(unchunkify(deconvert(chunk) for chunk in converted_chunks))<\/code><\/pre>\n<p>But what <em>about<\/em> that data we wrote out earlier?<\/p>\n<pre><code>$ ls\r\nnode0  node1  node2  node3  node4  node5  node6  node7<\/code><\/pre>\n<p>It would hardly be a compelling test of our erasure-coding skills if there were any suspicion that we actually needed <em>all<\/em> of those files\u2014we really only need as many as our chunk size. So let's suppose that three of our nodes die in a fire\u2014<\/p>\n<pre><code>$ rm node1 node3 node6\r\n$ ls\r\nnode0  node2  node4  node5  node7<\/code><\/pre>\n<p>Could this be the end of our data? 
With a full three-eighths of our encoding having been utterly destroyed, is it delusional to hold out hope that our text might yet be faithfully recovered? No! No, it is not! We only need one more function to retrieve the encoded chunks\u2014<\/p>\n<pre><code>def retrieve(*nodes):\r\n    responses = []\r\n    for node in nodes:\r\n        with open(node) as our_node:\r\n            responses.append(json.loads(our_node.read()))\r\n    return [[response[i] for response in responses]\r\n            for i in range(len(responses[0]))]<\/code><\/pre>\n<p>\u2014and then\u2014<\/p>\n<pre><code>$ python3\r\n&gt;&gt;&gt; from reed_solomon import *\r\n&gt;&gt;&gt; node_data = retrieve(&quot;node0&quot;, &quot;node2&quot;, &quot;node4&quot;, &quot;node5&quot;, &quot;node7&quot;)<\/code><\/pre>\n<p>\u2014successfully decode them!<\/p>\n<pre><code>&gt;&gt;&gt; erasure_decode(node_data, 5, 8)\r\n&#039;THE FUNDAMENTAL PROBLEM OF COMMUNICATION IS THAT OF REPRODUCING AT\r\n ONE POINT EITHER EXACTLY OR APPROXIMATELY A MESSAGE SELECTED AT AN\r\nOTHER POINT&#039;<\/code><\/pre>\n<p>Dear reader, it is true our toy implementation here was crude, the hundred-and-change bytes of data we demonstrated it on was of no intrinsic interest, and many obvious and not-so-obvious subtleties were ignored. But I implore you to consider the implications for your own storage needs of more advanced, not-merely-educational application of these vast and powerful techniques. Imagine just how soundly you'll be able to sleep at night knowing that you're well under your budget and yet your data is just as safe as if you had three complete independent copies of it!<\/p>\n<p>And even if you personally have no intention of deploying Swift\u2014as my SwiftStack colleague and project technical lead for OpenStack Swift John Dickinson has pointed out, we are rapidly entering an era in which everyone uses object storage, whether they realize it or not. 
In a hyperconnected global economy, even minor efficiency improvements in key infrastructure components can reap enormous benefits elsewhere, which is to say that the rest of your life will contain more happiness and less pain if the financial institution that invests your retirement savings, or the medical research institute that develops a cure for the cancer you'll get twenty years from now, or the image hosting service that serves you cute cat pictures today, has access to cheaper, faster, and more reliable storage than the means hitherto in our possession have rendered possible. And that's why erasure codes being in OpenStack Swift is a really great thing that will make the world a better place; <em>quod erat demonstrandum<\/em>.<\/p>\n<p>The code in this post is <a href=\"https:\/\/gist.github.com\/zackmdavis\/ebc08cf7913fcd9f6796\">available separately<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>(cross-posted from the SwiftStack Blog) In enabling mechanism to combine together general symbols, in successions of unlimited variety and extent, a uniting link is established between the operations of matter and the abstract mental processes of the most abstract branch &hellip; <a href=\"http:\/\/zackmdavis.net\/blog\/2015\/04\/the-foundations-of-erasure-codes\/\">Continue reading <span 
class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[20],"tags":[73,21],"_links":{"self":[{"href":"http:\/\/zackmdavis.net\/blog\/wp-json\/wp\/v2\/posts\/1561"}],"collection":[{"href":"http:\/\/zackmdavis.net\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/zackmdavis.net\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/zackmdavis.net\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"http:\/\/zackmdavis.net\/blog\/wp-json\/wp\/v2\/comments?post=1561"}],"version-history":[{"count":6,"href":"http:\/\/zackmdavis.net\/blog\/wp-json\/wp\/v2\/posts\/1561\/revisions"}],"predecessor-version":[{"id":1568,"href":"http:\/\/zackmdavis.net\/blog\/wp-json\/wp\/v2\/posts\/1561\/revisions\/1568"}],"wp:attachment":[{"href":"http:\/\/zackmdavis.net\/blog\/wp-json\/wp\/v2\/media?parent=1561"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/zackmdavis.net\/blog\/wp-json\/wp\/v2\/categories?post=1561"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/zackmdavis.net\/blog\/wp-json\/wp\/v2\/tags?post=1561"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}