Should I Finish My Bachelor's Degree?

To some, it might seem like a strange question. If you think of being college-educated as a marker of class (or personhood), the fact that I don’t have a degree at the age of thirty-six (!!) probably looks like a scandalous anomaly, which it would be only natural for me to want to remediate at the earliest opportunity.

I deeply resent that entire worldview—not because I’ve rejected education, properly understood. On the contrary. The study of literature, history, mathematics, science—these things are among the noblest pursuits in life, sources of highest pleasure and deepest meaning. It’s precisely because I value education so much that I can’t stand to see it conflated with school and its culture of bureaucratic servitude where no one cares what you know and no one cares what you can do; they just want you to sit in a room and obey the commands of the designated teacher. Whereas in reality, knowledge doesn’t come from “taking courses.”

How could it? Knowledge comes from quality study and practice. Sure, it’s possible that someone could study in order to “pass” a “class” that they’re “taking” in school. But once you know how and why to study, it’s not clear what value the school is adding that can’t be gotten better, cheaper, elsewhere. Just get the books. (And start a blog, go to meetups, chat to large language models, hire a private tutor—whatever makes sense to get better at doing the things you want to do, without having to worry about whether the thing that makes sense can be made legible to distant bureaucrats.)

The people who believe in being college-educated probably don’t believe me. They probably think my pæans to the glory of self-study are the rationalizations of a lazy student who doesn’t want to work hard.

I can understand some reasons for skepticism. Sometimes people really are lazy, and suffer from self-serving delusions. Probably there are some confused people out there who have mistaken consumer edutainment for production scholarship and—maybe, somehow—could benefit from being set straight by the firm tutelage of the standard bureaucratic authority.

But without vouching for everyone who calls themself an autodidact, I think I can present third-party-visible evidence that my self-study is for real? I worked as a software engineer for eight years; I have 173 commits in the Rust compiler; I wrote a chess engine; I’ve blogged 400,000 words over the past dozen years on topics from mathematics and machine learning, to formal epistemology and the philosophy of language, to politics and differential psychology, and much more.

This is not the portfolio of an uneducated person. If someone is considering working with me and isn’t sure of my competence, they’re welcome to look at my output and judge for themselves. (And I’m happy to take a test when that makes sense.) If someone would otherwise consider working with me, but is put off by the lack of a mystical piece of paper from the standard bureaucratic authority, that’s their loss—maybe I don’t want to work with someone with so little discernment.


If I believe everything I just wrote, explaining why I have nothing particularly to gain and nothing particularly to prove by jumping through a few more hoops to get the mystical piece of paper, then … why am I considering it?

One possible answer is that it passes a cost–benefit analysis mostly by virtue of the costs being low, rather than the benefits being particularly high. I’m at a time in my life where I have enough money from my previous dayjob, and enough uncertainty about how long the world is going to last, that I’d rather have lots of free time to work on things that interest me or add dignity to the existential risk situation than keep grinding at software dayjobs. So if my schedule isn’t being constrained by a dayjob for now, why not “take” some “classes” and finish off the mystical piece of paper? Continuing from where I left off in 2013, when I was rescued by the software industry, I need five more math courses and three more gen-eds to finish a B.A. in math at San Francisco State University, which I can knock out in two semesters. The commute is terrible, but I can choose my schedule to only be on campus a couple days a week. And then if it makes sense to go get another dayjob later, “I finished my Bachelor’s degree” is a legible résumé-gap excuse (easier to explain to semi-normies with hiring authority than “I finished my 80,000-word memoir of religious betrayal”).

In short, why not?—if I’m going to do it ever, now is a convenient time, and eight classes is a sufficiently small cost that it makes sense to do it ever (conditional on the world not ending immediately).

A less comfortable possible answer is that maybe I do have something to prove.

I often wonder why, as an intellectual, I seem to be so alone in my hatred of school. The people who are smart enough to do well in school are presumably also smart enough to have intellectual lives outside of school. Why do people put up with it? Why is there a presumption that there must be something wrong with someone who didn’t finish the standard course?

I think part of the answer is that, separately from whether the standard course makes sense as a class or personhood marker, once the signaling regime has been established, it’s mostly true that people who don’t finish the standard course probably have something wrong with them.

Separately from the fact that I’m obviously right that my personal passion projects are more intellectually meritorious than the busywork school demanded of me, there’s also something wrong with me. My not finishing the first time at UC Santa Cruz (expected class of 2010) wasn’t just a matter of opportunity costs. I also had obscure psychological problems unrelated to my intellectual ability to do the work, which were particularly triggered by the school environment (and thankfully aren’t triggered by software industry employment relations). Someone with my talents who wasn’t crazy probably would have arranged to finish on time for pragmatic reasons (notwithstanding the injustice of the whole system).

This makes it slightly less confusing that the system hasn’t been overthrown. It’s not that school somehow has a monopoly on learning itself. It’s that people who are good at learning mostly don’t have problems getting the mystical piece of paper granting them legal and social privileges, and therefore don’t have a chip on their shoulder about not having it.

If that were the entirety of the matter, it wouldn’t present a sufficient reason for me to finish. There would be little point in proving to anyone that I’ve outgrown my youthful mental health problems by showing that I can endure the same abuses as everyone else, when anything I might want to prove to someone is proven better by my history of making real things in the real world (code that profitable businesses pay for, blog posts that people want to read of their own volition).

But it gets worse. It may just be possible that I have something to prove intellectually, not just psychologically. In 2010, after studying math on my own for a couple years (having quit UC Santa Cruz in 2007), I enrolled in a differential equations class at the local community college, expecting to do well and validate the glory of my self-study. I was actually interested in math. Surely that would put me at an advantage over ordinary community college students who only knew how to do as they were told?

In fact, I did poorly, scraping by with a C. No doubt the people who believe in being college-educated will take this as proof of their worldview that nothing of intellectual value happens outside of schools, that anyone who thinks they learned something from a book that wasn’t assigned by their officially designated instructor is only deluding themselves.

Ultimately, I don’t think this is the correct moral. (If a poor performance in that one class counts as evidence against the hypothesis that I know what I’m doing, then good or dominant performances elsewhere—including in other school math classes—count as evidence for; a full discussion of the exact subskill deficits leading to my differential equations debacle is beyond the scope of this post.)

But even if the people who believe in being college-educated are ultimately wrong, I’m haunted by the fact they’re not obviously wrong. The fact that my expectations were so miscalibrated about the extent to which my being “into math” would easily convert into proficiency at finicky differential equations computations makes it less credible to just point at my work online and say, “Come on, I’m obviously the equal of your standard STEM graduate, even if I don’t have the mystical piece of paper.”

If that were the entirety of the matter, it still wouldn’t present a sufficient reason for me to finish. Desperately trying to prove one’s worth to the image of an insensible Other is just no way to live. When I was at SF State in 2012 (having endured the constant insults of three-plus semesters of community college, and my father being unwilling to pay for me to go back to Santa Cruz), it was for the perceived lack of other opportunities—and I was miserable, wondering when my life would begin. Whatever resources the university might have offered towards my genuine intellectual ambitions were tainted by the bitterness that I mostly wasn’t there to learn math; I was there because I felt coerced into proving that I could join the ranks of the college-educated.

But now that I’ve earned some of my own money (and for unrelated reasons feel like my life is over rather than waiting to begin), the relative balance of motivations has shifted. Getting the mystical piece of paper is still a factor, but now that it feels like I have a real choice, I think I can seek advantage in the situation with less bitterness.

It helps that I only have a few “general education” requirements left, which I experience as insulting obedience tests that are wholly inferior to my free reading and blogging, regardless of the quality of the professor. In contrast, I can regard some upper-division math classes as a worthy challenge. (Yes, even at SFSU. I am not very intelligent.) Learning math is hard and expensive: I can see how it makes sense to organize a coordinated “class” in which everyone is studying the same thing, with assignments and tests for feedback and calibration. It doesn’t seem like a betrayal of the divine to want to experience meeting that external standard with pride—now that I’m less crazy, now that I have a real choice, now that my life is otherwise over anyway. I’m not committed yet (the admissions office is supposed to get back to me), but I’m currently leaning towards doing it.

"Deep Learning" Is Function Approximation

A Surprising Development in the Study of Multi-layer Parameterized Graphical Function Approximators

As a programmer and epistemology enthusiast, I’ve been studying some statistical modeling techniques lately! It’s been boodles of fun, and might even prove useful in a future dayjob if I decide to pivot my career away from the backend web development roles I’ve taken in the past.

More specifically, I’ve mostly been focused on multi-layer parameterized graphical function approximators, which map inputs to outputs via a sequence of affine transformations composed with nonlinear “activation” functions.

(Some authors call these “deep neural networks” for some reason, but I like my name better.)

It’s a curve-fitting technique: by setting the multiplicative factors and additive terms appropriately, multi-layer parameterized graphical function approximators can approximate any function. For a popular choice of “activation” rule which takes the maximum of the input and zero, the curve is specifically a piecewise-linear function. We iteratively improve the approximation f(x, θ) by adjusting the parameters θ in the direction opposite the derivative of some error metric on the current approximation’s fit to some example input–output pairs (x, y), which some authors call “gradient descent” for some reason. (The mean squared error (f(x, θ) − y)² is a popular choice for the error metric, as is the negative log likelihood −log P(y | f(x, θ)). Some authors call these “loss functions” for some reason.)
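
To make this concrete, here’s a minimal sketch in Python with NumPy (my own illustrative toy, not anything canonical: the layer width of 16, the learning rate, and the sine-curve target are arbitrary choices) of a two-layer approximator with the max(input, 0) “activation”, whose parameters are adjusted against the derivative of the squared error:

import numpy as np

rng = np.random.default_rng(0)

# Parameters θ: the multiplicative factors and additive terms of two affine
# transformations.
params = {
    "W1": 0.5 * rng.normal(size=(16, 1)), "b1": np.zeros(16),
    "W2": 0.5 * rng.normal(size=(1, 16)), "b2": np.zeros(1),
}

def f(x, p):
    # Affine transformation, then the max(·, 0) "activation", then another
    # affine transformation.
    h = np.maximum(p["W1"] @ x + p["b1"], 0.0)
    return p["W2"] @ h + p["b2"]

def gradient_step(x, y, p, learning_rate=0.01):
    # Forward pass, remembering intermediate values.
    pre = p["W1"] @ x + p["b1"]
    h = np.maximum(pre, 0.0)
    y_hat = p["W2"] @ h + p["b2"]
    # Derivative of the squared error (f(x, θ) − y)² with respect to θ ...
    d_out = 2.0 * (y_hat - y)
    d_h = (p["W2"].T @ d_out) * (pre > 0)
    # ... and a small adjustment of θ against that derivative.
    p["W2"] -= learning_rate * np.outer(d_out, h)
    p["b2"] -= learning_rate * d_out
    p["W1"] -= learning_rate * np.outer(d_h, x)
    p["b1"] -= learning_rate * d_h

# Fit example input–output pairs sampled from a sine curve.
xs = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
ys = np.sin(3.0 * xs)
for _ in range(500):
    for x, y in zip(xs, ys):
        gradient_step(x, y, params)
# After fitting, f(x, params) is a piecewise-linear curve tracking sin(3x).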

Basically, the big empirical surprise of the previous decade is that given a lot of desired input–output pairs (x, y) and the proper engineering know-how, you can use large amounts of computing power to find parameters θ to fit a function approximator that “generalizes” well—meaning that if you compute ŷ = f(x, θ) for some x that wasn’t in any of your original example input–output pairs (which some authors call “training” data for some reason), it turns out that ŷ is usually pretty similar to the y you would have used in an example (x, y) pair.

It wasn’t obvious beforehand that this would work! You’d expect that if your function approximator has more parameters than you have example input–output pairs, it would overfit, implementing a complicated function that reproduced the example input–output pairs but outputted crazy nonsense for other choices of x—the more expressive function approximator proving useless for the lack of evidence to pin down the correct approximation.

And that is what we see for function approximators with only slightly more parameters than example input–output pairs, but for sufficiently large function approximators, the trend reverses and “generalization” improves—the more expressive function approximator proving useful after all, as it admits algorithmically simpler functions that fit the example pairs.

The other week I was talking about this to an acquaintance who seemed puzzled by my explanation. “What are the preconditions for this intuition about neural networks as function approximators?” they asked. (I paraphrase only slightly.) “I would assume this is true under specific conditions,” they continued, “but I don’t think we should expect such niceness to hold under capability increases. Why should we expect this to carry forward?”

I don’t know where this person was getting their information, but this made zero sense to me. I mean, okay, when you increase the number of parameters in your function approximator, it gets better at representing more complicated functions, which I guess you could describe as “capability increases”?

But multi-layer parameterized graphical function approximators created by iteratively using the derivative of some error metric to improve the quality of the approximation are still, actually, function approximators. Piecewise-linear functions are still piecewise-linear functions even when there are a lot of pieces. What did you think it was doing?

Multi-layer Parameterized Graphical Function Approximators Have Many Exciting Applications

To be clear, you can do a lot with function approximation!

For example, if you assemble a collection of desired input–output pairs (x, y) where the x is an array of pixels depicting a handwritten digit and y is a character representing which digit, then you can fit a “convolutional” multi-layer parameterized graphical function approximator to approximate the function from pixel-arrays to digits—effectively allowing computers to read handwriting.
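
Here’s a hedged sketch of what that might look like in code, using PyTorch; the layer sizes and learning rate are my own illustrative choices, not a recommended architecture:

import torch
from torch import nn

# A tiny "convolutional" approximator from 28×28 grayscale pixel arrays to
# scores for the ten digit classes.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # affine transformation over local patches
    nn.ReLU(),                                  # max(input, 0) "activation"
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 10),                 # final affine map to digit scores
)
loss_fn = nn.CrossEntropyLoss()  # negative log likelihood of the correct digit
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

def training_step(pixels, digits):
    # pixels: a batch of shape (batch, 1, 28, 28); digits: the desired outputs.
    optimizer.zero_grad()
    loss = loss_fn(model(pixels), digits)
    loss.backward()   # derivative of the error metric with respect to the parameters
    optimizer.step()  # adjust the parameters to reduce the error
    return loss.item()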

Such techniques have proven useful in all sorts of domains where a task can be conceptualized as a function from one data distribution to another: image synthesis, voice recognition, recommender systems—you name it. Famously, by approximating the next-token function in tokenized internet text, large language models can answer questions, write code, and perform other natural-language understanding tasks.

I could see how someone reading about computer systems performing cognitive tasks previously thought to require intelligence might be alarmed—and become further alarmed when reading that these systems are “trained” rather than coded in the manner of traditional computer programs. The summary evokes imagery of training a wild animal that might turn on us the moment it can seize power and reward itself rather than being dependent on its masters.

But “training” is just a suggestive name. It’s true that we don’t have a mechanistic understanding of how function approximators perform tasks, in contrast to traditional computer programs whose source code was written by a human. It’s plausible that this opacity represents grave risks, if we create powerful systems that we don’t know how to debug.

But whatever the real risks are, any hope of mitigating them is going to depend on acquiring the most accurate possible understanding of the problem. If the problem is itself largely one of our own lack of understanding, it helps to be specific about exactly which parts we do and don’t understand, rather than surrendering the entire field to a blurry aura of mystery and despair.

An Example of Applying Multi-layer Parameterized Graphical Function Approximators in Success-Antecedent Computation Boosting

One of the exciting things about multi-layer parameterized graphical function approximators is that they can be combined with other methods for the automation of cognitive tasks (which is usually called “computing”, but some authors say “artificial intelligence” for some reason).

In the spirit of being specific about exactly which parts we do and don’t understand, I want to talk about Mnih et al. 2013’s work on getting computers to play classic Atari games (like Pong, Breakout, or Space Invaders). This work is notable as one of the first high-profile examples of using multi-layer parameterized graphical function approximators in conjunction with success-antecedent computation boosting (which some authors call “reinforcement learning” for some reason).

If you only read the news—if you’re not in tune with there being things to read besides news—I could see this result being quite alarming. Digital brains learning to play video games at superhuman levels from the raw pixels, rather than because a programmer sat down to write an automation policy for that particular game? Are we not already in the shadow of the coming race?

But people who read textbooks and not just news, being no less impressed by the result, are often inclined to take a subtler lesson from any particular headline-grabbing advance.

Mnih et al.’s Atari result built off the technique of Q-learning introduced two decades prior. Given a discrete-time present-state-based outcome-valued stochastic control problem (which some authors call a “Markov decision process” for some reason), Q-learning concerns itself with defining a function Q(s, a) that describes the value of taking action a while in state s, for some discrete sets of states and actions. For example, to describe the problem faced by a policy for a grid-based video game, the states might be the squares of the grid, and the available actions might be moving left, right, up, or down. The Q-value for being on a particular square and taking the move-right action might be the expected change in the game’s score from doing that (including a scaled-down expectation of score changes from future actions after that).

Upon finding itself in a particular state s, a Q-learning policy will usually perform the action with the highest Q(s, a), “exploiting” its current beliefs about the environment, but with some probability it will “explore” by taking a random action. The predicted outcomes of its decisions are compared to the actual outcomes to update the function Q(s, a), which can simply be represented as a table with as many rows as there are possible states and as many columns as there are possible actions. We have theorems to the effect that as the policy thoroughly explores the environment, it will eventually converge on the correct Q(s, a).
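
In code, the tabular version is short. Here’s a minimal sketch; the env object with its reset and step methods is a stand-in interface of my own, and the learning rate, discount, and exploration probability are illustrative choices, not from the paper:

import random

def q_learning(env, states, actions, episodes=1000,
               learning_rate=0.1, discount=0.99, explore_probability=0.1):
    # The function Q(s, a), represented as a table: one row per state,
    # one column per action.
    q = {s: {a: 0.0 for a in actions} for s in states}
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # Usually "exploit" the highest-valued action; sometimes "explore".
            if random.random() < explore_probability:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda action: q[s][action])
            next_s, reward, done = env.step(a)
            # Compare the predicted outcome to the actual outcome and update
            # the table entry.
            best_next = 0.0 if done else max(q[next_s].values())
            q[s][a] += learning_rate * (reward + discount * best_next - q[s][a])
            s = next_s
    return q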

But Q-learning as originally conceived doesn’t work for the Atari games studied by Mnih et al., because it assumes a discrete set of possible states that could be represented with the rows in a table. This is intractable for problems where the state of the environment varies continuously. If a “state” in Pong is a 6-tuple of floating-point numbers representing the player’s paddle position, the opponent’s paddle position, and the x- and y-coordinates of the ball’s position and velocity, then there’s no way for the traditional Q-learning algorithm to base its behavior on its past experiences without having already seen that exact conjunction of paddle positions, ball position, and ball velocity, which almost never happens. So Mnih et al.’s great innovation was—

(Wait for it …)

—to replace the table representing Q(s, a) with a multi-layer parameterized graphical function approximator! By approximating the mapping from state–action pairs to discounted-sums-of-“rewards”, the “neural network” allows the policy to “generalize” from its experience, taking similar actions in relevantly similar states, without having visited those exact states before. There are a few other minor technical details needed to make it work well, but that’s the big idea.
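
Here’s a minimal sketch of that swap, under my own illustrative assumptions (a 6-dimensional Pong-like state and three available actions), omitting those other technical details (like experience replay) entirely:

import torch
from torch import nn

# The table is replaced with a parameterized function from states to the
# Q-value of each available action.
q_function = nn.Sequential(
    nn.Linear(6, 64),   # e.g., the 6-tuple Pong state described above
    nn.ReLU(),
    nn.Linear(64, 3),   # one output per available action
)
optimizer = torch.optim.SGD(q_function.parameters(), lr=1e-3)

def update(state, action, reward, next_state, done, discount=0.99):
    # Predicted value of the action taken ...
    predicted = q_function(state)[action]
    # ... compared against the observed reward plus the discounted value of
    # the best action available in the next state.
    with torch.no_grad():
        target = reward + (0.0 if done else discount * q_function(next_state).max())
    loss = (predicted - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()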

And understanding the big idea probably changes your perspective on the headline-grabbing advance. (It certainly did for me.) “Deep learning is like evolving brains; it solves problems and we don’t know how” is an importantly different story from “We swapped out a table for a multi-layer parameterized graphical function approximator in this specific success-antecedent computation boosting algorithm, and now it can handle continuous state spaces.”

Risks From Learned Approximation

When I solicited reading recommendations from people who ought to know about risks of harm from statistical modeling techniques, I was directed to a list of reputedly fatal-to-humanity problems, or “lethalities”.

Unfortunately, I don’t think I’m qualified to evaluate the list as a whole; I would seem to lack some necessary context. (The author keeps using the term “AGI” without defining it, and adjusted gross income doesn’t make sense in context.)

What I can say is that when the list discusses the kinds of statistical modeling techniques I’ve been studying lately, it starts to talk funny. I don’t think someone who’s been reading the same textbooks as I have (like Prince 2023 or Bishop and Bishop 2024) would write like this:

Even if you train really hard on an exact loss function, that doesn’t thereby create an explicit internal representation of the loss function inside an AI that then continues to pursue that exact loss function in distribution-shifted environments. Humans don’t explicitly pursue inclusive genetic fitness; outer optimization even on a very exact, very simple loss function doesn’t produce inner optimization in that direction. […] This is sufficient on its own […] to trash entire categories of naive alignment proposals which assume that if you optimize a bunch on a loss function calculated using some simple concept, you get perfect inner alignment on that concept.

To be clear, I agree that if you fit a function approximator by iteratively adjusting its parameters in the direction opposite the derivative of some loss function on example input–output pairs, that doesn’t create an explicit internal representation of the loss function inside the function approximator.

It’s just—why would you want that? And really, what would that even mean? If I use the mean squared error loss function to approximate a set of data points in the plane with a line (which some authors call a “linear regression model” for some reason), obviously the line itself does not somehow contain a representation of general squared-error-minimization. The line is just a line. The loss function defines how my choice of line responds to the data I’m trying to approximate with the line. (The mean squared error has some elegant mathematical properties, but is more sensitive to outliers than the mean absolute error.)
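
To spell out the toy example in code (the data points are made up for illustration):

import numpy as np

# Four made-up points in the plane, and the least-squares line through them.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])
slope, intercept = np.polyfit(x, y, deg=1)  # minimizes the mean squared error

# The fitted "model" is just two numbers. They contain no representation of
# squared-error-minimization; the loss function only determined how they were
# chosen, given these particular data points.
print(slope, intercept)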

It’s the same thing for piecewise-linear functions defined by multi-layer parameterized graphical function approximators: the model is the dataset. It’s just not meaningful to talk about what a loss function implies, independently of the training data. (Mean squared error of what? Negative log likelihood of what? Finish the sentence!)

This confusion about loss functions seems to be linked to a particular theory of how statistical modeling techniques might be dangerous, in which “outer” training results in the emergence of an “inner” intelligent agent. If you expect that, and you expect intelligent agents to have a “utility function”, you might be inclined to think of “gradient descent” “training” as trying to transfer an outer “loss function” into an inner “utility function”, and perhaps to think that the attempted transfer primarily doesn’t work because “gradient descent” is an insufficiently powerful optimization method.

I guess the emergence of inner agents might be possible? I can’t rule it out. (“Functions” are very general, so I can’t claim that a function approximator could never implement an agent.) Maybe it would happen at some scale?

But taking the technology in front of us at face value, that’s not my default guess at how the machine intelligence transition would go down. If I had to guess, I’d imagine someone deliberately building an agent using function approximators as a critical component, rather than your function approximator secretly having an agent inside of it.

That’s a different threat model! If you’re trying to build a good agent, or trying to prohibit people from building bad agents using coordinated violence (which some authors call “regulation” for some reason), it matters what your threat model is!

(Statistical modeling engineer Jack Gallagher has described his experience of this debate as “like trying to discuss crash test methodology with people who insist that the wheels must be made of little cars, because how else would they move forward like a car does?”)

I don’t know how to build a general agent, but contemporary computing research offers clues as to how function approximators can be composed with other components to build systems that perform cognitive tasks.

Consider AlphaGo and its successor AlphaZero. In AlphaGo, one function approximator is used to approximate a function from board states to move probabilities. Another is used to approximate the function from board states to game outcomes, where the outcome is +1 when one player has certainly won, −1 when the other player has certainly won, and a proportionately intermediate value indicating who has the advantage when the outcome is still uncertain. The system plays both sides of a game, using the board-state-to-move-probability function and board-state-to-game-outcome function as heuristics to guide a search algorithm which some authors call “Monte Carlo tree search”. The board-state-to-move-probability function approximation is improved by adjusting its parameters in the direction opposite the derivative of its cross-entropy with the move distribution found by the search algorithm. The board-state-to-game-outcome function approximation is improved by adjusting its parameters in the direction opposite the derivative of its squared difference with the self-play game’s ultimate outcome.
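
Here’s a rough sketch of those two parameter adjustments in PyTorch, with drastically simplified stand-in architectures (361 inputs for the 19 × 19 board points, 362 outputs for the legal moves plus a pass) and the search machinery omitted; the real systems differ in many details:

import torch
from torch import nn

# Stand-in approximators (the real architectures are much larger).
policy_net = nn.Sequential(nn.Linear(361, 256), nn.ReLU(), nn.Linear(256, 362))            # board → move scores
value_net = nn.Sequential(nn.Linear(361, 256), nn.ReLU(), nn.Linear(256, 1), nn.Tanh())    # board → outcome in [−1, +1]
optimizer = torch.optim.SGD(
    list(policy_net.parameters()) + list(value_net.parameters()), lr=1e-2
)

def update(board, search_move_distribution, game_outcome):
    # Cross-entropy between the approximator's move distribution and the
    # distribution found by the tree search ...
    log_probs = torch.log_softmax(policy_net(board), dim=-1)
    policy_loss = -(search_move_distribution * log_probs).sum()
    # ... plus the squared difference between the predicted and actual outcome.
    value_loss = (value_net(board).squeeze() - game_outcome) ** 2
    optimizer.zero_grad()
    (policy_loss + value_loss).backward()
    optimizer.step()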

This kind of design is not trivially safe. A similarly superhuman system that operated in the real world (instead of the restricted world of board games) that iteratively improved an action-to-money-in-this-bank-account function seems like it would have undesirable consequences, because if the search discovered that theft or fraud increased the amount of money in the bank account, then the action-to-money function approximator would generalizably steer the system into doing more theft and fraud.

Statistical modeling engineers have a saying: if you’re surprised by what your neural net is doing, you haven’t looked at your training data closely enough. The problem in this hypothetical scenario is not that multi-layer parameterized graphical function approximators are inherently unpredictable, or must necessarily contain a power-seeking consequentialist agent in order to do any useful cognitive work. The problem is that you’re approximating the wrong function, and you get what you measure. The failure would still occur even if the function approximator “generalizes” from its “training” data the way you’d expect. (If you can recognize fraud and theft, it’s easy enough to just not use that data as examples to approximate, but by hypothesis, this system is only looking at the account balance.) This doesn’t itself rule out more careful designs that use function approximators to approximate known-trustworthy processes and don’t search harder than their representation of value can support.

This may be cold comfort to people who anticipate a competitive future in which cognitive automation designs that more carefully respect human values will foreseeably fail to keep up with the frontier of more powerful systems that do search harder. It may not matter to the long-run future of the universe that you can build helpful and harmless language agents today, if your civilization gets eaten by more powerful and unfriendlier cognitive automation designs some number of years down the line. As a humble programmer and epistemology enthusiast, I have no assurances to offer, no principle or theory to guarantee everything will turn out all right in the end. Just a conviction that, whatever challenges confront us in the future, we’ll be in a better position to face them by understanding the problem in as much detail as possible.


Bibliography

Bishop, Christopher M., and Andrew M. Bishop. 2024. Deep Learning: Foundations and Concepts. Cambridge, UK: Cambridge University Press. https://www.bishopbook.com/

Mnih, Volodymyr, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. 2013. “Playing Atari with Deep Reinforcement Learning.” https://arxiv.org/abs/1312.5602

Prince, Simon J.D. 2023. Understanding Deep Learning. Cambridge, MA: MIT Press. http://udlbook.com

Sutton, Richard S., and Andrew G. Barto. 2018. Reinforcement Learning: An Introduction. 2nd ed. Cambridge, MA: MIT Press.

Plea Bargaining

I wish people were better at—plea bargaining, rather than pretending to be innocent. You accuse someone of [negative-valence description of trait or behavior that they're totally doing], and they say, "No, I'm not", and I'm just like ... really? How dumb do you think we are?

I think when people accuse me of [negative-valence description of trait or behavior], I'm usually more like, "Okay, I can see what you're getting at, but I actually think it's more like [different negative-valence description of trait or behavior], which I claim is a pretty reasonable thing to do given my goals and incentives."

(Because I usually can see what they're getting at! Even if their goal is just to attack me, attackers know to choose something plausible, because why would you attack someone with a charge that has no hope of sticking?)

Beauty Is Truthiness, Truthiness Beauty?

Imagine reviewing Python code that looks something like this.

has_items = items is not None and len(items) > 0
if has_items:
    ...

...
do_stuff(has_items=has_items)

You might look at the conditional, and disapprove: None and empty collections are both falsey, so there's no reason to define that has_items variable; you could just say if items:.

But, wouldn't it be weird for do_stuff's has_items kwarg to take a collection rather than a boolean? I think it would be weird: even if the function's internals can probably rely on mere truthiness rather than needing an actual boolean type for some reason, why leave it to chance?
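
Concretely, if do_stuff were annotated (a hypothetical signature, since we haven't seen its definition), you'd expect something like this:

def do_stuff(*, has_items: bool) -> None:
    ...  # internals presumably branch on an actual boolean

Passing the collection itself would satisfy mere truthiness, but it wouldn't match the declared intent.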

So, maybe it's okay to define the has_items variable for the sake of the function kwarg—and, having done so anyway, to use it as an if condition.

You might object further: but, but, None and the empty collection are still both falsey. Even if we've somehow been conned into defining a whole variable, shouldn't we say has_items = bool(items) rather than spelling out is not None and len(items) > 0 like some rube (or Rubyist) who doesn't know Python?!

Actually—maybe not. Much of Python's seductive charm comes from its friendly readability ("executable pseudocode"): it's intuitive for if not items to mean "if items is empty". English, and not the formal truthiness rules, is all ye need to know. In contrast, it's only if you already know the rules that bool(items) becomes meaningful. Since we care about good code and don't care about testing the reader's Python knowledge, spelling out items is not None and len(items) > 0 is very arguably the right thing to do here.

January Is Math and Wellness Month

(Previously)

There is a time to tackle ambitious intellectual projects and go on grand political crusades, and tour the podcast circuit marketing both.

That time is not January. January is for:

  • sleeping (at the same time every night)
  • running, or long walks
  • reflecting on our obligations under the moral law
  • composing careful memoirs on our failures before the moral law (in anticipation of being court-martialed in February for crimes of December)
  • chores
  • medium-term planning
  • performing well at one's dayjob
  • studying math in the evenings
  • avoiding Twitter (starting now)
  • not using psychiatric medications like quetiapine unless the expected consequences of doing so seem better

And You Take Me the Way I Am

Mark Twain wrote that honesty means you don't have to remember anything. But it also means you don't have to worry about making mistakes.

If you said something terrible that made everyone decide that you're stupid and evil, there's no sense in futilely protesting that "that's not what you meant", or agonizing that you should have thought more carefully and said something else in order to avoid the outcome of everyone thinking that you're stupid and evil.

Strategy is deception. You said what you said in the situation you were in, and everyone else used the information in that signal as evidence for a Bayesian update about your intelligence and moral character. As they should. So what's the problem? You wouldn't want people to have false beliefs, would you!?

Coffee Is for Coders

No one cares if you're in pain;
They only want results.
Everywhere this law's the same,
In startups, schools, and cults.
A child can pull the heartstrings
Of assorted moms and voters,
But your dumb cries are all in vain,
And coffee is for coders.

No one cares how hard you tried
(Though I bet it wasn't much),
But work that can on be relied,
If not relied as such.
A kitten is forgiven
As are a broken gear or rotors,
But your dumb crimes are full of shame,
And coffee is for coders.

The Parable of the Scorpion and the Fox

In the days of auld lang syne on Earth-that-was, a scorpion was creepy-crawling along a riverbank, wondering how to get to the other side. It came across an animal that could swim: some versions of the tale say it was a fox, others report a quokka. I'm going to assume it was a fox.

So the scorpion asks the fox to take it on her back and swim across the river. What does the fox say? She says, "No." The scorpion says, "If this is because you're afraid I'll sting you with my near-instantly-fatal toxins, don't worry—if I did that, then we'd likely both drown. By backwards induction, you're safe." What does the fox say? After pondering for a few moments, she says, "Okay."

So the scorpion gets on the fox's back, and the fox begins to swim across the river. When the pair is halfway across the river, the scorpion stings the fox.

The fox howls in pain while continuing to paddle. "Why?!" she cries. "Why did you do that?! As you said before, now we're likely to both drown."

The scorpion says, "I can't help it. It's my nature."

As the fox continues to paddle, the scorpion continues. "Interestingly, there's a very famous parable about this exact scenario. There was even an episode of Star Trek: Voyager titled after it. As a fox who knows many things, you must have heard it before. Why did you believe me?"

"I can't help it," gasped the fox, who might after all have been a quokka, as the poison filled her veins and her vision began to blur and her paddling began to slow. "It's my nature."

Blogging on Less Wrong 2020 (Upper Half)

Relationship Outcomes Are Not Particularly Sensitive to Small Variations in Verbal Ability

After a friendship-ending fight, you feel an impulse to push through the pain to do an exhaustive postmortem of everything you did wrong in that last, fatal argument—you could have phrased that more eloquently, could have anticipated that objection, could have not left so much "surface area" open to that class of rhetorical counterattack, could have been more empathetic on that one point, could have chosen a more-fitting epigraph, could have taken more time to compose your reply and squeeze in another pass's worth of optimizations—as if searching for some combination of variables that would have changed the outcome, some nearby possible world where the two of you are still together.

No solution exists. (Or is findable in polynomial time.) The causal forces that brought you to this juncture are multitudinous and complex. A small change in the initial conditions only corresponds to a small change in the outcome; you can't lift a two-ton weight with ten pounds of force.

Not all friendship problems are like this. Happy endings do exist—to someone else's story in someone else's not-particularly-nearby possible world. Not for you, not here, not now.

Feature Reduction

(looking at baby/toddler photos a year apart) "How does he look so different and yet so the same at the same time?"

"Just in case that was non-rhetorical, the answer is that your brain evolved to be good at factorizing overall appearance into orthogonal 'personal appearance' and 'age appearance' dimensions that can be tracked separately, just as [x, y] = [1, 2] and [4, 2] are so different with respect to x, and yet so the same with respect to y, at the same time."