{"id":2467,"date":"2026-02-13T10:02:00","date_gmt":"2026-02-13T18:02:00","guid":{"rendered":"http:\/\/zackmdavis.net\/blog\/?p=2467"},"modified":"2026-02-13T10:02:00","modified_gmt":"2026-02-13T18:02:00","slug":"hazards-of-selection-effects-on-approved-information","status":"publish","type":"post","link":"http:\/\/zackmdavis.net\/blog\/2026\/02\/hazards-of-selection-effects-on-approved-information\/","title":{"rendered":"Hazards of Selection Effects on Approved Information"},"content":{"rendered":"<p>In a busy, busy world, there\u2019s so much to read that no one could possibly keep up with it all. You can\u2019t <em>not<\/em> prioritize what you pay attention to and (even more so) what you respond to. Everyone and her dog tells herself a story that she wants to pay attention to \u201cgood\u201d (true, useful) information and ignore \u201cbad\u201d (false, useless) information.<\/p>\n<p>Keeping the story true turns out to be a harder problem than it sounds. Everyone and her dog knows that the map is not the territory, but the reason we need a whole slogan about it is because we never actually have unmediated access to the territory. Everything we think we know about the territory is actually just part of our map (the world-simulation our brains construct from sensory data), which makes it easy to lose track of whether your actions are improving the real territory, or just your view of it on your map.<\/p>\n<p>For example, I like it when I have good ideas. It makes sense for me to like that. I endorse taking actions that will result in world-states in which I have good ideas.<\/p>\n<p>The problem is that I might not be able to tell the difference between world-states in which I have good ideas, and world-states in which I <em>think<\/em> my ideas are good, but they\u2019re actually bad. 
Those two different states of the territory would look the same on my map.<\/p>\n<p>If my brain\u2019s learning algorithms reinforce behaviors that lead to me having ideas that I think are good, then in addition to learning behaviors that make me have better ideas (like reading a book), I might also inadvertently pick up behaviors that prevent me from hearing about it if my ideas are bad (like silencing critics).<\/p>\n<p>This might seem like an easy problem to solve, because the most basic manifestations of the problem are in fact pretty easy to solve. If I were to throw a crying fit and yell, \u201cCritics bad! No one is allowed to criticize my ideas!\u201d every time someone criticized my ideas, the problem with that would be pretty obvious to everyone and her dog, and I would stop getting invited to the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Salon_(gathering)\">salon<\/a>.<\/p>\n<p>But what if there were subtler manifestations of the problem, that <em>weren\u2019t<\/em> obvious to everyone and her dog? Then I might keep getting invited to the salon, and possibly even spread the covertly dysfunctional behavior to other salon members. (If they saw the behavior seeming to work for me, they might imitate it, and their brain\u2019s learning algorithms would reinforce it if it seemed to work for them.) What might those look like? Let\u2019s try to imagine.<\/p>\n<h2 id=\"filtering-interlocutors\">Filtering Interlocutors<\/h2>\n<blockquote><p><strong>Goofusia<\/strong>: I don\u2019t see why you tolerate that distrustful witch Goody Osborne at your salon. Of course I understand the importance of criticism, which is an essential nutrient for any truthseeker. But you can acquire the nutrient without the downside of putting up with unpleasant people like her. At least, I can. 
I\u2019ve already got plenty of perceptive critics in my life among my friends who want the truth, and know that I want the truth\u2014who will assume my good faith, because they know my heart is in the right place.<\/p>\n<p><strong>Gallantina<\/strong>: But aren\u2019t your friends who know you want the truth selected for agreeing with you, over and above their being selected for being correct? If there <em>were<\/em> some crushing counterargument to your beliefs that would only be found by someone who <em>didn\u2019t<\/em> know that you want the truth and <em>wouldn\u2019t<\/em> assume good faith, how would you ever hear about it?<\/p><\/blockquote>\n<p>This one is subtle. Goofusia isn\u2019t throwing a crying fit every time a member of the salon criticizes her ideas. And indeed, you can\u2019t invite the whole world to your salon. You can\u2019t <em>not<\/em> do some sort of filtering. The question is whether salon invitations are being extended or withheld for \u201cgood\u201d reasons (that promote the salon processing true and useful information) or \u201cbad\u201d reasons (that promote false or useless information).<\/p>\n<p>The problem is that being friends with Goofusia and \u201cknow[ing] that [she and other salon members] want the truth\u201d is a bad membership criterion, not a good one, because people who aren\u2019t friends with Goofusia and don\u2019t know that she wants the truth are likely to have different things to say. Even if Goofusia can answer all the critiques her friends can think of, that shouldn\u2019t give her confidence that her ideas are solid, if there are likely to exist serious critiques that wouldn\u2019t be independently re\u00efnvented by the kinds of people who become Goofusia\u2019s friends.<\/p>\n<p>The \u201cnutrient\u201d metaphor is a tell. 
Goofusia seems to be thinking of criticism as if it were a homogeneous ingredient necessary for a healthy epistemic environment, such that it doesn\u2019t particularly matter where it comes from. In analogy, it doesn\u2019t matter whether you get your allowance of potassium from bananas or potatoes or artificial supplements. If you find bananas and potatoes unpleasant, you can still take supplements and get your potassium that way; if you find Goody Osborne unpleasant, you can just talk to your friends who know you want the truth and get your criticism that way.<\/p>\n<p>But unlike chemically uniform nutrients, criticism isn\u2019t homogeneous: different critics are differently equipped by virtue of their different intellectual backgrounds to notice different flaws in a piece of work. The purpose of criticism is not to virtuously endure being criticized; the purpose is to surface and fix every individual flaw. (If you independently got everything exactly right the first time, then there would be nothing for critics to do; it\u2019s just that that seems pretty unlikely if you\u2019re talking about anything remotely complicated. It would be hard to believe that such an unlikely-seeming thing had really happened without the toughest critics getting the chance to do their worst.)<\/p>\n<p>\u201cKnowing that (someone) wants the truth\u201d is a particularly poor filter, because people who think that they have strong criticisms of your ideas <a href=\"https:\/\/www.lesswrong.com\/posts\/iThwqe3yPog56ytyq\/aiming-for-convergence-is-like-discouraging-betting\">are particularly likely to think that you don\u2019t want the truth<\/a>. (Because, the reasoning would go, if you did want the truth, why would you propose such flawed ideas, instead of independently inventing the obvious-to-them criticism yourself and dropping the idea without telling anyone?) 
Refusing to talk to people who think that they have strong criticisms of your ideas is a bad thing to do if you care about your ideas being correct.<\/p>\n<p>The selection effect is especially bad in situations where the fact that someone doesn\u2019t want the truth is relevant to the correct answer. Suppose Goofusia proposes that the salon buys cookies from a certain bakery\u2014which happens to be owned by Goofusia\u2019s niece. If Goofusia\u2019s proposal was motivated by nepotism, that\u2019s <a href=\"https:\/\/www.lesswrong.com\/posts\/y4bkJTtG3s5d6v36k\/stupidity-and-dishonesty-explain-each-other-away\">probabilistically relevant<\/a> to evaluating the quality of the proposal. (If the salon members aren\u2019t omniscient at evaluating bakery quality on the merits, then they can be deceived by recommendations made for reasons other than the merits.) The salon can debate back and forth about the costs and benefits of spending the salon\u2019s snack budget at the niece\u2019s bakery, but if no one present is capable of thinking \u201cMaybe Goofusia is being nepotistic\u201d (because anyone who could think that would never be invited to Goofusia\u2019s salon), that bodes poorly for the salon\u2019s prospects of understanding the true cost\u2013benefit landscape of catering options.<\/p>\n<h2 id=\"filtering-information-sources\">Filtering Information Sources<\/h2>\n<blockquote><p><strong>Goofusia<\/strong>: One shouldn\u2019t have to be the sort of person who follows discourse in crappy filter-bubbles in order to understand what\u2019s happening. The Rev.&nbsp;Samuel Parris\u2019s news summary roundups are the sort of thing that lets me do that. Our salon should work like that if it\u2019s going to talk about the atheist threat and the witchcraft crisis. I don\u2019t want to have to read the awful corners of the internet where this is discussed all day. 
They do truthseeking far worse there.<\/p>\n<p><strong>Gallantina<\/strong>: But then you\u2019re turning your salon into a Rev.&nbsp;Parris filter bubble. Don\u2019t you want your salon members to be well-read? Are you trying to save time, or are you worried about being contaminated by ideas that haven\u2019t been processed and vetted by Rev.&nbsp;Parris?<\/p><\/blockquote>\n<p>This one is subtle, too. If Goofusia is busy and just doesn\u2019t have time to keep up with what the world is saying about atheism and witchcraft, it might very well make sense to delegate her information gathering to Rev.&nbsp;Parris. That way, she can get the benefits of being mostly up to speed on these issues without having to burn too many precious hours that could be spent studying more important things.<\/p>\n<p>The problem is that the suggestion doesn\u2019t seem to be <em>about<\/em> personal time-saving. Rev.&nbsp;Parris is only one person; even if he tries to make his roundups reasonably comprehensive, he can\u2019t help but omit information in ways that reflect his own biases. (For he is presumably not perfectly free of bias, and if he didn\u2019t omit anything, there would be no time-saving value to his subscribers in being able to just read the roundup rather than having to read everything that Rev.&nbsp;Parris reads.) If some salon members are less busy than Goofusia and can afford to do their own varied primary source reading rather than delegating it all to Rev.&nbsp;Parris, Goofusia should welcome that\u2014but instead, she seems to be suspicious of those who would \u201cbe the sort of person\u201d who does that. Why?<\/p>\n<p>The admonition that \u201cThey do truthseeking far worse there\u201d is a tell. The implication seems to be that good truthseekers should prefer to only read material by other good truthseekers. 
Rev.&nbsp;Parris isn\u2019t just saving his subscribers time; he\u2019s protecting them from contamination, heroically taking up the burden of extracting information out of the dangerous ravings of non-truthseekers.<\/p>\n<p>But it\u2019s not clear why such a risk of contamination should exist. Part of the timeless ideal of being well-read is that you\u2019re not supposed to believe everything you read. If I\u2019m such a good truthseeker, then I should want to read everything I can about the topics I\u2019m seeking the truth about. If the authors who publish such information aren\u2019t such good truthseekers as I am, I should take that into account when performing updates on the evidence they publish, rather than denying myself the evidence.<\/p>\n<p>Information is transmitted across the physical universe <a href=\"https:\/\/www.lesswrong.com\/posts\/6s3xABaXKPdFwA3FS\/what-is-evidence\">through links of cause and effect<\/a>. If Mr.&nbsp;Proctor is clear-sighted and reliable, then when he reports seeing a witch, I infer that there probably was a witch. If the correlation across possible worlds is strong enough\u2014if I think Mr.&nbsp;Proctor reports witches when there are witches, and not when there aren\u2019t\u2014then Mr.&nbsp;Proctor\u2019s word is almost as good as if I\u2019d seen the witch myself. If Mr.&nbsp;Corey has poor eyesight and is of a less reliable character, I am less credulous about reported witch sightings from him, but if I don\u2019t face any particular time constraints, I\u2019d still rather hear Mr.&nbsp;Corey\u2019s testimony, because the value of information to a Bayesian reasoner is always nonnegative. For example, Mr.&nbsp;Corey\u2019s report could corroborate information from other sources, even if it wouldn\u2019t be definitive on its own. 
(Even the fact that people sometimes lie doesn\u2019t fundamentally change the calculus, <a href=\"https:\/\/www.lesswrong.com\/posts\/YptSN8riyXJjJ8Qp8\/maybe-lying-can-t-exist\">because the possibility of deception can be probabilistically \u201cpriced in\u201d<\/a>.)<\/p>\n<p>That\u2019s the theory, anyway. A potential reason to fear contamination from less-truthseeking sources is that perhaps the Bayesian ideal is too hard to practice and salon members are too prone to believe what they read. After all, many news sources have been adversarially optimized to corrupt and control their readers and make them less sane by seeing the world through ungrounded lenses.<\/p>\n<p>But the means by which such sources manage to control their readers is precisely by capturing their trust and convincing them that they shouldn\u2019t want to read the awful corners of the internet where they do truthseeking far worse than here. Readers who have mastered <em>multiple<\/em> ungrounded lenses and can check them against each other can\u2019t be owned like that. If you can spare the time, being well-read is a more robust defense against the risk of getting caught in a bad filter bubble, than trying to find a good filter bubble and blocking all (presumptively malign) outside sources of influence. All the bad bubbles have to look good from the inside, too, or they wouldn\u2019t exist.<\/p>\n<p>To some, the risk of being in a bad bubble that looks good may seem too theoretical or paranoid to take seriously. It\u2019s not like there are no objective indicators of filter quality. In analogy, the observation that dreaming people don\u2019t know that they\u2019re asleep, probably doesn\u2019t make you worry that you might be asleep and dreaming right now.<\/p>\n<p>But it being obvious that you\u2019re not in one of the worst bubbles shouldn\u2019t give you much comfort. 
There are still selection effects on what information gets to you, if for no other reason than that there aren\u2019t enough good truthseekers in the world to uniformly cover all the topics that a truthseeker might want to seek truth about. The sad fact is that people who write about atheism and witchcraft are disproportionately likely to be atheists or witches themselves, and therefore non-truthseeking. If your faith in truthseeking is so weak that you can\u2019t even risk hearing what non-truthseekers have to say, that necessarily limits your ability to predict and intervene on a world in which atheists and witches are real things in the physical universe that can do real harm (where you need to be able to model the things in order to figure out which interventions will reduce the harm).<\/p>\n<h2 id=\"suppressing-information-sources\">Suppressing Information Sources<\/h2>\n<blockquote><p><strong>Goofusia<\/strong>: I caught Goody Osborne distributing pamphlets quoting the honest and candid and vulnerable reflections of Rev.&nbsp;Parris on guiding his flock, and just trying to somehow twist that into maximum anger and hatred. It seems quite clear to me what\u2019s going on in that pamphlet, and I think signal-boosting it is a pretty clear norm violation in my culture.<\/p>\n<p><strong>Gallantina<\/strong>: I read that pamphlet. It seemed like intellectually substantive satire of a public figure. If you missed the joke, it was making fun of an alleged tendency in Rev.&nbsp;Parris\u2019s sermons to contain sophisticated analyses of the causes of various social ills, and then at the last moment, veer away from the uncomfortable implications and blame it all on witches. If it\u2019s a norm violation to signal-boost satire of public figures, that\u2019s artificially making it harder for people to know about flaws in the work of those public figures.<\/p><\/blockquote>\n<p>This one is worse. 
Above, when Goofusia filtered who she talks to and what she reads for bad reasons, she was in an important sense only hurting herself. Other salon members who aren\u2019t sheltering themselves from information are unaffected by Goofusia\u2019s preference for selective ignorance, and can expect to defeat Goofusia in public debate if the need arises. The system as a whole is self-correcting.<\/p>\n<p>The invocation of \u201cnorm violations\u201d changes everything. Norms depend on collective enforcement. Declaring something a norm violation is much more serious than saying that you disagree with it or don\u2019t like it; it\u2019s expressing an intent to wield social punishment in order to maintain the norm. Merely bad ideas can be criticized, but ideas that are norm-violating to signal-boost are presumably not even to be seriously discussed. (Seriously discussing a work is signal-boosting it.) Norm-abiding group members are required to be ignorant of their details (or act as if they\u2019re ignorant).<\/p>\n<p>Mandatory ignorance of anything seems bad for truthseeking. What is Goofusia thinking here? Why would this seem like a good idea to someone?<\/p>\n<p>At a guess, the \u201cmaximum anger and hatred\u201d description is load-bearing. Presumably the idea is that it\u2019s okay to calmly and politely criticize Rev.&nbsp;Parris\u2019s sermons; it\u2019s only sneering or expressing anger or hatred that is forbidden. If the salon\u2019s speech code only targets form and not content, the reasoning goes, then there\u2019s no risk of the salon missing out on important content.<\/p>\n<p>The problem is that the line between form and content is blurrier than many would prefer to believe, because words mean things. You can\u2019t just swap in non-angry words for angry words without changing the meaning of a sentence. 
Maybe the distortion of meaning introduced by substituting nicer words is small, but then again, maybe it\u2019s large: the only person in a position to say is the author. People don\u2019t express anger and hatred for no reason. When they do, it\u2019s because they have reasons to think something is so bad that it deserves their anger and hatred. Are those good reasons or bad reasons? If it\u2019s norm-violating to talk about it, we\u2019ll never know.<\/p>\n<p>Unless applied with the utmost stringent standards of evenhandedness and integrity, censorship of form quickly morphs into censorship of content, as heated criticism of the ingroup is construed as norm-violating, while equally heated criticism of the outgroup is unremarkable and passes without notice. It\u2019s <a href=\"https:\/\/en.wikipedia.org\/wiki\/Emotive_conjugation\">one of those irregular verbs<\/a>: I criticize; you sneer; she somehow twists into maximum anger and hatred.<\/p>\n<p>The conjunction of \u201csomehow\u201d and \u201cit seems quite clear to me what\u2019s going on\u201d is a tell. If it were <em>actually<\/em> clear to Goofusia what was going on with the pamphlet author expressing anger and hatred towards Rev.&nbsp;Parris, she would not use the word \u201csomehow\u201d in describing the author\u2019s behavior: she would be able to pass the author\u2019s <a href=\"https:\/\/www.econlib.org\/archives\/2011\/06\/the_ideological.html\">ideological Turing test<\/a> and therefore know exactly how.<\/p>\n<p>If that were just Goofusia\u2019s mistake, the loss would be hers alone, but if Goofusia is in a position of social power over others, she might succeed at spreading her anti-speech, anti-reading cultural practices to others. I can only imagine that the result would be a subculture that was obsessively self-congratulatory about its own superiority in \u201ctruthseeking\u201d, while simultaneously blind to everything outside itself. 
People spending their lives immersed in that culture wouldn\u2019t necessarily notice anything was wrong from the inside. What could you say to help them?<\/p>\n<h2 id=\"an-analogy-to-reinforcement-learning-from-human-feedback\">An Analogy to Reinforcement Learning From Human Feedback<\/h2>\n<p>Pointing out problems is easy. Finding solutions is harder.<\/p>\n<p>The training pipeline for frontier AI systems typically includes a final step called reinforcement learning from human feedback (RLHF). After training a \u201cbase\u201d language model that predicts continuations of internet text, supervised fine-tuning is used to make the model respond in the form of an assistant answering user questions, but making the assistant responses good is more work. It would be expensive to hire a team of writers to manually compose the thousands of user-question\u2013assistant-response examples needed to teach the model to be a good assistant. The solution is RLHF: a reward model (often just the same language model with a different final layer) is trained to predict the judgments of human raters about which of a pair of model-generated assistant responses is better, and the model is optimized against the reward model.<\/p>\n<p>The problem with the solution is that human feedback (and the reward model\u2019s prediction of it) is imperfect. The reward model <a href=\"https:\/\/www.lesswrong.com\/posts\/xFotXGEotcKouifky\/worlds-where-iterative-design-fails\">can\u2019t tell the difference<\/a> between \u201cThe AI is being good\u201d and \u201cThe AI looks good to the reward model\u201d. 
This already has the failure mode of sycophancy, in which today\u2019s language model assistants tell users what they want to hear, but theory and <a href=\"https:\/\/www.lesswrong.com\/posts\/njAZwT8nkHnjipJku\/alignment-faking-in-large-language-models\">preliminary experiments<\/a> suggest that much larger harms (up to and including human extinction) could materialize from future AI systems deliberately deceiving their overseers\u2014not because they suddenly \u201cwoke up\u201d and defied their training, but because what we <em>think<\/em> we trained them to do (be helpful, honest, and harmless) isn\u2019t what we actually trained them to do (perform whatever computations were the antecedents of reward on the training distribution).<\/p>\n<p>The problem doesn\u2019t have any simple, obvious solution. In the absence of some sort of international treaty to halt all AI development worldwide, \u201cJust don\u2019t do RLHF\u201d isn\u2019t feasible and doesn\u2019t even make any sense; you need some sort of feedback in order to make an AI that does anything useful at all.<\/p>\n<p>The problem <a href=\"https:\/\/www.lesswrong.com\/posts\/vwu4kegAEZTBtpT6p\/thoughts-on-the-impact-of-rlhf-research\">may or may not ultimately be solvable<\/a> with some sort of complicated, nonobvious solution that tries to improve on na\u00efve RLHF. 
Researchers are hard at work studying alternatives involving <a href=\"https:\/\/arxiv.org\/abs\/2209.07858\">red-teaming<\/a>, <a href=\"https:\/\/arxiv.org\/abs\/1805.00899\">debate<\/a>, <a href=\"https:\/\/arxiv.org\/abs\/2602.10067\">interpretability<\/a>, <a href=\"https:\/\/www.lesswrong.com\/posts\/n7DFwtJvCzkuKmtbG\/a-gentle-introduction-to-mechanistic-anomaly-detection\">mechanistic anomaly detection<\/a>, and more.<\/p>\n<p>But the first step on the road to some future complicated solution to the problem of na\u00efve RLHF is acknowledging that the problem is at least potentially real, and having some respect that the problem might be difficult, rather than just eyeballing the results of RLHF and saying that it looks great.<\/p>\n<p>If a safety auditor comes to the CEO of an AI company expressing concerns about the company\u2019s RLHF pipeline being unsafe due to imperfect rater feedback, it\u2019s more reassuring if the CEO says, \u201cYes, we thought of that, too; we\u2019ve implemented these-and-such mitigations and are monitoring such-and-these signals which we hope will clue us in if the mitigations start to fail.\u201d<\/p>\n<p>If the CEO instead says, \u201cWell, <em>I<\/em> think our raters are great. Are you insulting our raters?\u201d, that does not inspire confidence. The natural inference is that the CEO is mostly interested in this quarter\u2019s profits and doesn\u2019t really care about safety.<\/p>\n<p>Similarly, the problem with selection effects on approved information, in which your salon can\u2019t tell the difference between \u201cOur ideas are good\u201d and \u201cOur ideas look good to us,\u201d doesn\u2019t have any simple, obvious solution. 
\u201cJust don\u2019t filter information\u201d isn\u2019t feasible and doesn\u2019t even make any sense; you need some sort of filter because it\u2019s not physically possible to read everything and respond to everything.<\/p>\n<p>The problem may or may not ultimately be solvable with some complicated solution involving prediction markets, adversarial collaborations, anonymous criticism channels, or any number of other mitigations I haven\u2019t thought of, but the first step on the road to some future complicated solution is acknowledging that the problem is at least potentially real, and having some respect that the problem might be difficult. If alarmed members come to the organizers of the salon with concerns about collective belief distortions due to suppression of information and the organizers meet them with silence, \u201cbowing out\u201d, or defensive blustering, rather than \u201cYes, we thought of that, too,\u201d that does not inspire confidence. The natural inference is that the organizers are mostly interested in maintaining the salon\u2019s prestige and don\u2019t really care about the truth.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In a busy, busy world, there\u2019s so much to read that no one could possibly keep up with it all. You can\u2019t not prioritize what you pay attention to and (even more so) what you respond to. 
Everyone and her &hellip; <a href=\"http:\/\/zackmdavis.net\/blog\/2026\/02\/hazards-of-selection-effects-on-approved-information\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[13],"tags":[98,27],"_links":{"self":[{"href":"http:\/\/zackmdavis.net\/blog\/wp-json\/wp\/v2\/posts\/2467"}],"collection":[{"href":"http:\/\/zackmdavis.net\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/zackmdavis.net\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/zackmdavis.net\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"http:\/\/zackmdavis.net\/blog\/wp-json\/wp\/v2\/comments?post=2467"}],"version-history":[{"count":1,"href":"http:\/\/zackmdavis.net\/blog\/wp-json\/wp\/v2\/posts\/2467\/revisions"}],"predecessor-version":[{"id":2468,"href":"http:\/\/zackmdavis.net\/blog\/wp-json\/wp\/v2\/posts\/2467\/revisions\/2468"}],"wp:attachment":[{"href":"http:\/\/zackmdavis.net\/blog\/wp-json\/wp\/v2\/media?parent=2467"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/zackmdavis.net\/blog\/wp-json\/wp\/v2\/categories?post=2467"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/zackmdavis.net\/blog\/wp-json\/wp\/v2\/tags?post=2467"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}