Arc Forum
3 points by shader 1465 days ago | link | parent

> ...we'll see over time whether coming up with the right abstractions is a sufficient approach

What other approaches are you considering? And sufficient for what purpose?

> it's not at all inevitable that the right abstractions will win out

I addressed this more thoroughly in my other wall of text, but I don't see why "winning" is a necessary objective. If you have an abstraction that works, it won't suddenly disappear on you. Or maybe I don't understand. "Win out" over what? And by what standard of success?

> We have to very carefully set the initial conditions to ensure the paths to them aren't prematurely discarded.

That's a slightly different consideration: trying to pick initial conditions that result in better abstractions. Honestly, it sounds like premature optimization, and harder than the halting problem. In some ways, though, I think the general principles I mentioned are precisely intended to address this challenge. Basically, whenever you come to a design decision where you don't know what the right answer is, or where there might be more than one, don't hide the complexity; pass it on to the user as flexibility. The system stays correct and transparent without lying about how it works, and the user doesn't lose any options. This approach seems to satisfy your desire not to prematurely discard paths.

> The analogy with math is misleading here, because math doesn't have syscalls

My reference to math was not an analogy, but a description of an approach to abstractions. In Mathematics, abstractions are held to rigorous standards: they must actually be proven to behave as claimed, any exceptions must be included in the definition, and any application outside of the assumptions is invalid or at least highly suspect and must be justified.

> syscalls let us do fairly arbitrary things. Given this much power, "don't lie" is about as enforceable as it is with legislators.

It's not enforceable with mathematicians either; the best we can do is read each other's work to check for such lies and avoid using flawed work. Such care is more important in mathematics, where any flaw can ruin a proof, whereas in software a bug might only be reached in rare edge cases, and even then we can turn it off and on again. That's why the cost of failure is lower and programmers don't apply the same effort to writing flawless code. But what I'm proposing is not "don't let anyone lie", which sounds impossible, but "don't lie", which is a call for personal integrity. It's a design decision in which integrity and transparency are chosen over comfort and simplicity. That decision is probably just as costly in software as it is in life, but hopefully proves just as rewarding.

> ...applying it becomes something outside of math. An externality.

That just sounds wrong to me, and needs a stronger argument. It may be a pain to write out all the preconditions and postconditions describing the whole state of the computer and environment (I certainly wouldn't recommend it), but that doesn't mean it is conceptually impossible and thus "outside math". However, thinking that way is mostly irrelevant anyway. Instead, I propose the simpler objective of not misrepresenting what could happen. "Not lying" is not the same thing as "telling the whole truth". Math uses pretty broad lower and upper bounds all the time; precise answers aren't always available, but we can still avoid claiming things we can't prove.



2 points by akkartik 1465 days ago | link

I think we're saying the same thing but misunderstanding the words used by each other. Don't get hung up on my careless use of the word "win". Like I said elsewhere, I'm not trying to genocide competing ideas :)

The rest of this thread is just a reminder to me to avoid analogies like the (ahem) plague.

-----

3 points by akkartik 1465 days ago | link

> My reference to math was not an analogy, but a description of an approach to abstractions. In Mathematics, abstractions are held to rigorous standards: they must actually be proven to behave as claimed, any exceptions must be included in the definition, and any application outside of the assumptions is invalid or at least highly suspect and must be justified.

Such abstractions are great -- if you can find them. Once-in-a-lifetime things. Pretty much nothing we use in software comes close to this. Not even Lisp.

Not lying is not easy. You have to take responsibility for basically every misunderstanding someone may make with your ideas.

-----

3 points by shader 1464 days ago | link

> Such abstractions are great -- if you can find them. Once in a lifetime things.

Maybe we're talking about different things when it comes to abstractions.

They don't have to be paradigms and approaches, like your reference to Lisp as a whole. I'm not even sure how "Lisp" fits into the "don't lie" model; it's on a completely different scale. I'm just concerned with how things are represented. UDP doesn't lie; it says up front that its datagrams are unreliable. TCP, on the other hand, pretends to be a reliable ordered stream; that pretense comes at a cost, and sometimes fails anyway.
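
To make the contrast concrete, here's a minimal Rust sketch of the shape of the two interfaces (the addresses and the port are made up; this isn't meant as a working network program, just an illustration of what each API claims):

  use std::net::UdpSocket;

  fn main() -> std::io::Result<()> {
      // UDP is honest about its semantics: each send is a single
      // datagram that may be lost, duplicated, or reordered.
      let socket = UdpSocket::bind("127.0.0.1:0")?;    // arbitrary local port
      socket.send_to(b"hello", "127.0.0.1:9000")?;     // may silently never arrive
      // TCP (std::net::TcpStream) instead presents a connected, ordered
      // byte stream; the retransmission and ordering machinery is hidden,
      // and only surfaces as an error or a stall when the pretense breaks.
      Ok(())
  }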

Another one that came up recently was Golang vs Rust and how they handle file paths (or pretty much anything, really). Golang takes the path of least short-term resistance and tries to make things easy by pretending paths are strings. It turns out that isn't always true. Rust, in contrast, works hard to ensure correctness; for file paths it uses a dedicated OsStr type that covers the possible cases. There are lots of examples where Rust uses Result types to present a more complex but accurate set of outcomes.
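
For what it's worth, here's a small Rust sketch of what that looks like in practice (the path is just an example):

  use std::path::Path;

  fn main() {
      // A path is not guaranteed to be valid UTF-8, so Rust won't pretend it is.
      let path = Path::new("/tmp/example-file");    // hypothetical path
      match path.to_str() {
          Some(s) => println!("representable as a &str: {}", s),
          None => println!("not valid UTF-8; only usable as an OsStr"),
      }
      // Fallible operations return a Result, so the failure cases are part
      // of the signature instead of being papered over.
      match std::fs::metadata(path) {
          Ok(meta) => println!("{} bytes", meta.len()),
          Err(e) => println!("couldn't stat it: {}", e),
      }
  }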

> Not lying is not easy. You have to take responsibility for basically every misunderstanding someone may make with your ideas.

That sounds like a good point, but after thinking about it for a second I don't think I agree. If I were really trying to guarantee that no one misunderstood my work, that would be a problem. However, that's not the objective of the principle, which is a guide for choosing representations. Apply the same logic to human honesty. Is it really impossible to avoid lying in conversation? Am I suddenly lying if someone misinterprets what I said?

I think avoiding lying in abstraction design should be similar to personal honesty in conversation with other people. Don't intentionally misrepresent something. If someone misreads the specification and makes a mistake, that's their fault.

I will go a step further though and say that simply having a disclaimer doesn't match the spirit of the principle. Surely the TCP documentation (or a little critical thinking) will reveal that it can't truly make streams reliable. The problem, though, is that it tries, and wants you to believe it succeeds except in "rare circumstances." I would rather have a system built on UDP with full expectation of the potential failures than one built on TCP that thought it had covered all the edge cases that mattered, only to run into a new one.

I guess the Erlang "let it crash" philosophy is almost a corollary. If you don't harbor the illusion that your code won't crash, and you prepare for that worst case, then any unhandled error can just be generalized to a crash.
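
Erlang expresses that with supervisors; a very rough sketch of the same shape in Rust (the worker, its failure, and the retry count are all invented for illustration) might look like:

  use std::thread;

  fn worker() {
      // Real work would go here; anything unexpected just panics instead
      // of being papered over with a guessed-at recovery path.
      panic!("simulated unexpected failure");
  }

  fn main() {
      // A crude supervisor: restart the worker whenever it crashes.
      for attempt in 1..=3 {
          if thread::spawn(worker).join().is_err() {
              println!("worker crashed (attempt {}), restarting", attempt);
          }
      }
  }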

The purpose of the principle is to make better designs by giving more accurate and flexible options to the user. If a design choice is between exposing the internals as they are, or trying to cover up the complexity to coddle the user, choose the former. Give the user the flexibility and power of handling the details themselves (possibly via library). Don't lie.

-----

3 points by rocketnia 1443 days ago | link

I like this whole discussion, akkartik and shader. You've both made a lot of excellent points, and I found myself nodding along to one comment only to nod along to the next as well.

Right now I have a lot to say about the math analogy in particular. (The rest of the discussion has been great too.)

Mathematics has a lot in common with even the messy parts of programming.

Mathematics involves a lot of computation, and not necessarily of the digital computer kind or the arithmetic kind. If a human reader has to look up a definition of a word they don't know, then they're practically having to perform a manual lambda calculus substitution step. A lot of popular concepts in math are subject to transitive closure, which gives them indirect consequences hidden away on non-obvious reasoning paths. A lot of topics in math have to do with metareasoning and higher-order reasoning. Altogether, the kind of effort it takes for a human reader to understand a mathematical claim can involve a lot of the same things that on a computer we'd consider program execution.

As much as people might not like to admit it, mathematical theorems don't always take the form of the precise "don't lie" abstractions shader is describing. When a proof has a flaw in it, people still make use of the theorem, either by conjecturing that it's true anyway for some as-yet-undiscovered reason, or by explaining why the flaws in the proof don't matter in this context. In domains where it makes sense to change the mathematical foundations (e.g. deciding to use a different logic or a different set theory), a lot of the bread-and-butter theorems and concepts of mathematics can end up having flaws in them, but instead of coining new names for all those theorems and concepts, it's easier to use the same names and merely describe all the little patches that are necessary for them to work. So I think mathematics makes use of its share of abstraction-breaking techniques, techniques a software engineer might simulate with some combination of dynamic scope, side effects, preprocessing passes, code-walking, aspect-oriented weaving, dependency injection, or something like that. (This is mostly visible to people who are trying to make the math precise enough that a computer can verify or assist with it.)

Of course, math is not quite the same as software engineering. Unlike software, math is written primarily for humans to understand, and it only incidentally has computational aspects. This influences the kind of BS that's possible with math, both for better and for worse.

- For the worse: Once a mathematical argument goes on for a bit too long, humans rarely have the diligence to require that every single part of that reasoning makes sense; they're content to give leeway to some parts that they already feel they understand clearly. Some popular points of leeway end up serving as the foundations for a lot of mathematics, and we might call those the "syscalls" of math. Of course, every paradox of barbers and liars and time travel and infinity and whatnot reveals that humans are stubbornly hospitable to inconsistent ideas, and software engineering shows that humans' leeway leaves room for bugs on an extremely frequent basis.

- For the better: Since humans are in the loop when it comes to reading and sharing mathematical results, the kind of BS that confuses and dismays people has some trouble thriving. If the effort it takes to apply a mathematical concept is too full of hacks and spaghetti, people probably won't find it to be their favorite concept and won't share it with each other. (Of course, there seem to be some concepts which make a lot of sense once people get to know them, but still seem to require a rather circuitous route to learn about. In this way, people can end up being enthusiastic about parts of math that look, from the outside, to be full of nope.)

Considering all that mess, math nevertheless has a reputation for leakless abstractions, and that reputation is well deserved. "The study and development of leakless abstractions" would be a fitting definition of mathematics. The mess comes from the fact that humans are the ones discussing, developing, identifying examples of, and using the abstractions.

Likewise, even if software engineering deals with a big mess of leaky abstractions a lot of the time, the leakless ones are an important part of the design space. Unlike hardware, software code is a mathematically precise chunk of data, and the ways we transform it and compile it are easily a mathematical topic with lots of room for leakless abstractions. The reason (and perhaps the only essential reason) for the mess is that humans are the ones discussing, developing, making hardware for, and using the software.

While it's clear that math and software are two worlds with notable differences -- distinguished at least by the presence of computer hardware that gives a user meaningful value out of using software they don't know how to maintain -- I believe software could very well develop a popular perception as a world of Platonic forms, the same kind of perception math has. It's not that farfetched for people today to say, "obviously, as soon as you put an algorithm on a device and execute it, it's not the same algorithm." What if someday people say nothing we build or do can be a "true" algorithm because an algorithm is a Platonic concept that our world can only approximate?

Is that the right perception for software? Well... is it the right perception for mathematics? I think the perception doesn't matter that much one way or the other. Everyday software can have leakless abstractions of the same kind everyday mathematics is known for, and many of math's abstractions are actually riddled with holes in ways software engineers might find familiar.

-----

2 points by shader 1464 days ago | link

I have a metaphor I like to use sometimes to describe the perils of over-extending metaphors. That's not exactly the same, but maybe it's relevant enough to share.

  A metaphor is like an old rusty wheelbarrow; if you put too
  much into it and push it too far, it will break down, you'll
  trip over it, cut yourself, get infected with tetanus, and
  end up in the hospital filled with regret.
I'm still not satisfied with the ending and tweak it slightly each time. It fits the pandemic humor rather well though.

-----