Arc Forum
3 points by shader 1461 days ago | link | parent

I feel like much of the disagreement in this conversation comes down to a question of degrees, as you mentioned in your other comment.

Many of your comments seem to imply an absolutism that you may not actually mean; I know I sometimes overstate my own arguments when trying to keep a point concise. Or possibly I'm just being pedantic by asking you to qualify all of your claims.

I agree that laws and software last a long time. Most other systems would wear out and need replacing eventually. The need to explicitly repeal laws gives them incredible inertia, and is a good argument for sunset clauses. At first glance software seems like it could last as long, since it's only information, but in practice I think the underlying systems and surrounding environment change enough that the software must also adapt or die.

Perhaps it's a change in perspective. If you look at the software application itself, it doesn't have to change and can last forever. But systems change, and eventually leave the application behind. That's a (possible) victory for those of us on the side of active replacement.

> As a result they're impossible to safely delegate.

That doesn't follow. Long-lasting results could just as easily be an argument for hiring an expert to do the best possible job. E.g. an architect for a stone cathedral vs. a DIY shed.

> You can't pay programmers to write code for you.

I would think that the existence of the software _industry_ rather than a network of niche artisans flatly disproves this statement. People do hire others, and it proves to be a more effective and profitable way to develop usable products than trying to do everything by oneself.

> because they'll drown you in technical debt

If that's the only argument against hiring someone else, it's not a very good one. For one thing, companies can set style and documentation requirements, so that little information is lost when a hired programmer leaves. On the other hand, I am quite capable of drowning myself in technical debt on my own; I might even be more likely to take shortcuts if I have to do everything myself. To paraphrase my father, software written by someone else is hard to understand and maintain, and software you wrote more than 6 months ago was written by someone else.

I think my main point in all of this is to try to convince you not to be so pessimistic, because a lot of the things you're focusing on either aren't actually that big of a problem, or aren't necessary to solve. Part of it is also about strategy. I generally agree with your design decisions and objectives, but based on the HN discussion it's going to be a hard sell if you present your ideas in direct opposition to everything else. Someone pushed back on the idea that everyone needs to understand implementation details. Truth is, they're right; but instead of explaining how the availability of details can still be valuable even if they aren't needed by everyone on a daily basis, you tried to counter. I'm not holding that against you; I have a tendency to do the same thing (it's easy to respond point-by-point to all the flaws in a post and forget that I should be working towards an objective). But I think you can do better.

Still, kudos to you for getting this far in the first place. I have too much fun talking about ideas to actually build them.



3 points by akkartik 1460 days ago | link

I went back and reread my grandparent comment here, and I actually wouldn't change much if I were phrasing it more carefully. I'm much more certain about the problem than about Mu as a solution to it.

The one thing I would change is to add a "given current technology". I don't intend to suggest that the problem of delegating rule-making is destined to be forever impossible. But it does seem about as difficult as the fictional science of psychohistory (https://en.wikipedia.org/wiki/Psychohistory_(fictional)). It's not a hard science, but requires far better understanding of human nature than we currently have.

> > As a result they're impossible to safely delegate.

> That doesn't follow. Long-lasting results could just as easily be an argument for hiring an expert to do the best possible job. E.g. an architect for a stone cathedral vs. a DIY shed.

Let me requote myself to add the sentence it follows from: "Both are systems of rules that have unanticipated long-term effects on the lives of people. As a result they're impossible to safely delegate." The crux here is the 'unanticipated'. A badly built stone cathedral can cave in; that's about the worst case. If you were rich enough, that may seem like a reasonable worst-case risk, and you can protect against it with some diversification. But installing the wrong program on your computer can compromise the whole system. And it's not just a theoretical possibility either. In rules and software, action at a distance is the norm rather than the exception.

> I would think that the existence of the software _industry_ rather than a network of niche artisans flatly disproves this statement. People do hire others, and it proves to be a more effective and profitable way to develop usable products than trying to do everything by oneself.

You're right, when I say "impossible" I mean what should be rather than what is. I would not hire a programmer (even though I am one and I get paid by others; this is a painful self-criticism to admit to) and I would not recommend any of my loved ones do so. We all rely on hired programmers. I have grown painfully aware of this exposure. At least we should try not to increase this reliance.

The current software industry is not "effective", for any meaning of the term that is related to the end user over any reasonably long term. It certainly is profitable. I'd much rather the world had been a lot more measured in its incorporation of software into every aspect of modern life.

-----

2 points by shader 1460 days ago | link

> I don't intend to suggest that the problem of delegating rule-making is destined to be forever impossible. But it does seem about as difficult as the fictional science of psychohistory

I wasn't thinking along those terms, but the clarification is still helpful. Also, I like the reference to psychohistory. A cool idea, but completely unrealistic given chaos theory. If we can't even manage the dynamics of the logistic map [0], successfully predicting the behavior of more complex nonlinear systems is even more unlikely.

I'm still not sure it's fair to characterize all of software as unpredictable or having unpredictable effects though. The limit of the iterated logistic map is unpredictable for certain parameter ranges, but that doesn't mean that all math functions have unpredictable limits. I think for the purposes of this discussion it would be very helpful to identify which systems do and why, so that we can better understand what should or even could be done about it.
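
To make this concrete, here's a minimal Python sketch (my own illustration, not anything from your posts) showing both regimes of the logistic map: at r=2.5 the map forgets small perturbations and settles on a fixed point, while at r=4.0 a perturbation of one part in a billion grows until the two trajectories are completely decorrelated.

  # Logistic map: x' = r * x * (1 - x)
  def iterate(r, x, n=100):
    for _ in range(n):
      x = r * x * (1 - x)
    return x

  for r in (2.5, 4.0):
    a = iterate(r, 0.2)         # two starting points differing
    b = iterate(r, 0.2 + 1e-9)  # by one part in a billion
    print(r, a, b, abs(a - b))
  # r=2.5: both converge to the fixed point (r-1)/r = 0.6; the gap vanishes.
  # r=4.0: after 100 steps the gap is as large as the attractor itself.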

To some extent I accept the unpredictability premise, because software is written and used in a social context (which your statements about psychohistory imply is your primary concern), and that context has very complicated dynamics. But at the same time, the software artifact itself is basically static information that has no dynamics at all.

This raises the question: which effects, exactly, are you concerned about?

> In rules and software, action at a distance is the norm rather than the exception.

In contrast to your statement about needing a better understanding of human nature, this statement makes it sound like the dynamics and unpredictability you're considering are the behavior of software components in composition with other components. Which is it? Even so, I think in principle I could agree with that as well; in general, unless you know what's in the components, and what all of the components of the system are, you can't predict what the behavior of a system will be. Specific cases are another matter, however.

That said, there's lots of work on atomicity and confinement that can help with identifying boundaries. Imperative code is exposed to possible side effects whenever it invokes a function, but pure functional languages are not. I think proper security models can also help with this; in an object-capability system, nothing can perform an operation unless you have explicitly handed it the capability to do so. Loose coupling (services, protocols, dataflow) limits the possible externalities as well. I grant that the 'action at a distance' problem exists, but there are other possible solutions besides "avoid delegation entirely".
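
As a toy illustration of the object-capability point (the names and structure here are mine and purely hypothetical; Python can't fully enforce this, since builtins like open are ambient authority, but the discipline conveys the idea):

  # A capability is just an object reference. A component can only
  # use the authority it is explicitly handed.
  class ReadOnlyFile:
    "Grants the ability to read one file, and nothing else."
    def __init__(self, path):
      self._path = path
    def read(self):
      with open(self._path) as f:
        return f.read()

  def word_count(doc):
    # Receives only a read capability; it was never handed the
    # authority to write, delete, or open other files.
    return len(doc.read().split())

  # The caller decides exactly how much authority to delegate:
  # word_count(ReadOnlyFile("notes.txt"))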

> I would not hire a programmer [...] and I would not recommend any of my loved ones do so

Are you really suggesting that the only software anyone uses should be software they wrote themselves? That hardly seems scalable or effective. Like subsistence farming. One might think that subsistence farmers have more control over their diet, since they get to control most of the inputs. They can hypothetically choose which varieties of vegetable to plant, and what fertilizers and techniques to use in growing them. In practice though, a true subsistence farmer is very constrained to use only what they can produce, and vastly more exposed to the environment since they have no alternative sources. Buying food at a modern supermarket does give one less direct control over how the food is grown, but there are many, many more options to choose from around the world. Perhaps you won't find exactly what you want, but you're much more likely to be able to find it year round.

The software available for sale may not be what you want, but you have lots of choices and you don't have to buy any of it. I can agree that being able to understand the internals could help make better choices, but I can't understand an absolute injunction against delegation.

> The current software industry is not "effective", for any meaning of the term that is related to the end user over any reasonably long term

I think this is probably the key point. How would you define "effective"? Given that people do pay for software, they must consider it effective for their objectives.

I suppose another line of reasoning might be to consider not a specific configuration of software, but rather the changes in its objectives over time (which may have been what you were alluding to with the reference to psychohistory). In that case, the specific objectives don't matter but rather some level of adaptability. That fits with what you've said earlier about being able to jump into a system and make changes with only a few hours of effort to understand the surrounding behavior. I can definitely see how that would be an argument for easily understandable and accessible internals, similar to the car maintenance analogy. However, I still don't see an argument against delegation.

Indeed, another key question is: what are you proposing is different about writing software oneself than letting someone else do it?

A third party could easily be better at building maintainable and understandable systems than myself (I could have hired you to build Mu...), and I can be cautious to hire only people I trust and take security precautions. If it has something to do with consequent understanding of the system that was built, I would argue that after a fairly short amount of time all of those differences are erased by forgetfulness. As I said before, any software you wrote 6 months ago was written by somebody else.

Again, I am not at all questioning the value of accessible and comprehensible internals, just the seemingly unrelated arguments against delegation etc.

----------

[0] https://en.wikipedia.org/wiki/Logistic_map

-----

2 points by akkartik 1460 days ago | link

> A cool idea, but completely unrealistic given chaos theory.

Agreed.

> In contrast to your statement about needing a better understanding of human nature, this statement makes it sound like the dynamics and unpredictability you're considering are the behavior of software components in composition with other components. Which is it?

When the mechanisms are simple and easy to reason about, it's easier to delegate because worst-case analysis of human nature is possible. When the mechanisms are powerful, we need greater understanding of human nature to navigate the complexity of incentives and unforeseen consequences.

> Are you really suggesting that the only software anyone uses should be software they wrote themselves?

Yeah, this bears clarifying. I'm not hung up on some notion of purity that I came up with. I wouldn't have made this recommendation in the '80s; back then, things didn't seem so far gone. My recommendation takes into account the amount of software, complexity, and tragedy-of-the-commons effects we've already introduced into the world. We have enough bespoke software. We don't need more that is driven by extrinsic motivations like pay. We need more software driven by intrinsic motivation, that is more likely to try to grapple with long-term consequences. Someone inexperienced may still pollute the well, but they may learn from their past decisions. Software professionals like me are largely unlikely to think hard about these things, because our salaries depend on our not understanding them.

When the first truffula tree was cut down I wouldn't get too hung up on it. (Assuming I had the same worldview I have now.) But at some point enough trees get chopped down that it gradually gets more and more urgent to not dig ourselves deeper.

Hopefully this clarifies things even if I didn't answer every last question of yours. Do let me know if you'd still like me to answer some other bit.

-----

3 points by shader 1459 days ago | link

> When the mechanisms are simple and easy to reason about, it's easier to delegate because worst-case analysis of human nature is possible. When the mechanisms are powerful, we need greater understanding of human nature to navigate the complexity of incentives and unforeseen consequences.

That helps tie the two together, thanks.

I'm still not convinced though on what you think the downsides would be, or why; specifically, why they couldn't be bounded and partitioned in some way, or why you have to get them all correct up front instead of adapting later with better information.

If we look at the truly long term, the butterfly effect might indeed paralyze us as to what the potential consequences of any single action might be. But a butterfly-flap does not directly cause a hurricane; they are connected by a myriad of intervening causes, many of which could have stopped the storm. Planning ahead is good, but how far? Specifically, how far given how little we actually control.

> We need more software driven by intrinsic motivation, that is more likely to try to grapple with long-term consequences.

This helps a lot for explaining your vision, and sounds much more agreeable.

I read After Virtue by Alasdair MacIntyre [0] recently, and this sounds like an appreciation for the virtues necessary for the successful practice of software development. It would be interesting to explore that further.

> the ... tragedy-of-the-commons effects we've already introduced into the world ... But at some point enough trees get chopped down that it gradually gets more and more urgent to not dig ourselves deeper

Mentioning a tragedy of the commons and an analogy of tree-chopping makes it sound as though industrial software were producing destructive and indelible environmental effects. The idea of "digging ourselves deeper" also implies that the harm is cumulative and aggregate. That is, something like a "bad-software gas" effect, where every emission of bad software contributes to the total and doesn't easily dissipate.

I think this is a contrast I keep trying to come back to; maybe eventually I'll figure out how to state it clearly.

Chopping a tree is destructive and permanent; the tree doesn't respawn, though eventually new trees might grow. This makes tree-chopping cumulative unless managed properly. It also genuinely reduces the aggregate number of trees, and thus whatever benefits trees provide the environment; lost trees could hypothetically affect everyone. Eventually, tree-chopping could add up to something significant. Furthermore, each tree chopped is a larger percentage of the remaining trees, so later chops may be more serious than earlier ones.

I don't think any of these attributes apply to software, though. Writing software is constructive. Each project is independent, and does not cause any effects unless directly invoked or linked. This means new projects create more options but not more liability; there is no aggregate externality. There are hundreds of thousands of projects on GitHub, and thousands more companies with private source, and I am aware of and affected by very few of them. If one of those projects causes harm when I link it into mine, that's as much my fault as the original author's, because I chose to link it.

The existence of vague, aggregate negative externalities for software needs further justification. Specifically identifying the issue may also help identify better how to fight it.

I think a common error in macro analysis (such as in Keynesian economics) is to treat members of a type as an aggregate quantity. The capital in an economy isn't a homogeneous quantity C that can be easily shifted to different industries, but actual machines and experts coordinated in a structure of relationships that can't be so easily changed. Software doesn't exist as some abstract quantity S that increases or decreases and correlates to a quantity of harm H. Each project is unique, and used by other projects in unique ways - or may not have any effect on them, if they are not so connected.

> Hopefully this clarifies things even if I didn't answer every last question of yours. Do let me know if you'd still like me to answer some other bit.

That's fine; it's enough if my questions help you understand my perspective and confusion, so you can better address them. I'm trying to work out more precisely my thoughts on these things as well. The better I understand the problem and the value of these particular software virtues, the better I can incorporate them into my own work.

---------

[0] https://en.wikipedia.org/wiki/After_Virtue

-----

2 points by akkartik 1459 days ago | link

You said above that chopping a tree is permanent, even though new trees may grow. We're of one mind there. It's worth dwelling on this distinction, because it is the essence of what I'm trying to get across.

It's not about the single tree. Trees grow and die, and every step of the process seems natural in isolation. The problem with externalities lies in the scale, in the disruption of the natural balance between competing forces.

Yes, individual rules are not directly connected to each other, and their effects are often isolated. The trouble lies in their implementation, and in how the interactions between the implementations of disparate rules often reduce the degrees of freedom in future rule-making and management.

Say you have 100 people creating new rules or software. Each of them only cares about a few projects of rule-making, and the different projects are largely independent. If 90% of the rule-makers are making badly designed rules, the whole will gradually grow unmanageable. This story doesn't depend on what the rules are about in the real world. All we are concerned about are the implementation details. Are they implemented parsimoniously, or is the overall effect a result of two clauses combining from hundreds of pages apart? Are they easy to understand, or are they deliberately phrased in a convoluted manner to make it difficult for newcomers to join in the business of rule making? Do they follow some meta-rules consistently, or do they normalize deviance (http://lmcontheline.blogspot.com/2013/01/the-normalization-o...)? There are many such tests, and we think of them as 'design'. The design of a system is a meta-commons even if each of us has a different use case within it.
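
Here's a tiny Python sketch of that design test (illustrative only, not from any real codebase). The same rule can be implemented so that all of its inputs are visible at the point of use, or so that its effect depends on a switch flipped far away:

  import math

  # Action at a distance: the rule's behavior depends on state
  # mutated somewhere far away in the program.
  ROUNDING = "floor"  # some distant module flips this to "ceil"

  def tax_distant(amount, rate):
    fn = math.floor if ROUNDING == "floor" else math.ceil
    return fn(amount * rate)

  # Parsimonious: every input is visible at the call site, so the
  # rule can be audited locally.
  def tax_local(amount, rate, rounding=math.floor):
    return rounding(amount * rate)

The two compute the same thing today, but only the second can be reasoned about without reading the rest of the program.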

Even if you think you don't care about the rules most people follow, the rules care about you. For example, it's harder for me to find other C++ programmers to collaborate with because the pool of people who care about the same domain as me is further diluted by people who care about the same subset of C++ as me. The parts of C++ I don't use still affect my life.

-----

3 points by shader 1455 days ago | link

Our point of difference here concerns externalities. You seem to think they are pervasive, largely irreversible, and negative. I'm still not sure that such externalities exist, or what they would be, let alone why they would be irreversible or negative. And what about positive externalities, as good practices are recognized and recommended by people to their peers?

> The trouble lies in their implementation, and in how the interactions between the implementations of disparate rules often reduce the degrees of freedom in future rule-making and management.

Within a legal jurisdiction all laws are effectively "executing in the same scope." Thus it's obvious and natural that they could interact with each other. It's less obvious that such interactions are negative, or why they would make it harder for new people to write rules. Clearly our legislators have no difficulty pumping out more and more laws every year...

In software, the implementations don't run in the same scope, unless deliberately made to do so. And again, software projects aren't compulsory or indelible. They can be deleted, ignored, replaced, etc. very easily.

I might understand if you focused on the social implications of the software, such as one person using a code base, thinking its patterns were normal, and replicating them elsewhere. But you specifically mention the implementations, and that interactions between implementations somehow reduce degrees of freedom. How? Where? Within a project, or across projects?

> Say you have 100 people creating new rules or software.

This whole section mixes your analogy and your subject. Many of the statements are obvious or at least reasonable for a legal system, but not obvious for software; at least, not software as a whole. Perhaps an individual project can fit the analogy fairly well. A program may, per your analogy, have complex behavior caused by spaghetti interactions across hundreds of pages, which would make things difficult for a newcomer to change. Such things repeated as patterns across a project do constitute the "design" or architecture of it. Do such patterns exist across all of software though, when anyone can start a new project in an empty file?

I appreciate the mention of "normalizing deviance", it's potentially a useful concept for this discussion. I think you're referring to the abstract pattern of normalizing deviance from what could otherwise have been consistent meta-rules, as opposed to a specific deviance? I'm not sure where that leaves me though.

I can almost imagine the "meta-commons" you mention, but I have no confidence that it is anything other than "common sense". Yes, people have habits of thought and behavior informed by their past experiences; there's no reason they can't learn and do better on their next project though.

> Even if you think you don't care about the rules most people follow, the rules care about you. For example, it's harder for me to find other C++ programmers to collaborate with because the pool of people who care about the same domain as me is further diluted by people who care about the same subset of C++ as me. The parts of C++ I don't use still affect my life.

It sounds like you're describing a situation where "bad" approaches steal mind share, reducing the number of people available for doing "good" work. Or, in your example, the existence of other domains (rules?) creates negative externalities for you by diluting your supply of collaborators. This sounds like a zero-sum fallacy. You're assuming that those people would still be C++ programmers, and that they would be compelled to share your interests for lack of options, or that the other domains poached collaborators who were rightfully yours. Generally it works the other way, though: the overall size of the market is increased by the existence of competitors, who share marketing costs in a larger economy of scale. Thus, because of those other domains, people with different interests are drawn into C++ programming who otherwise wouldn't be, and maybe some of them might eventually discover your domain and collaborate with you. Sadly, we can't run the experiment both ways to find out in which universe you'd have more collaborators.

-----

3 points by akkartik 1453 days ago | link

This is a helpful summary of our differences. The keystone is whether zero-sum is a good framing or not.

I think we should by default assume we're in a zero-sum regime anytime people's attention is involved. For example, think back to the SOPA/PIPA boycotts a few years ago when every website turned off for a day. That's a trick you can only pull a finite number of times before people grow jaded.

Similarly, someone is only going to try to learn programming a small number of times. And every time they think they failed increases the energy barrier for the next time. Burnout is the primary concern here, to my mind.

The same principles apply also to experienced programmers learning about new software. Burnout is the prime enemy, but the concern of minimizing burnout is on no author's mind. Nobody owns the design goal of comprehensibility of internals. Lack of ownership is the essence of an externality.

When it takes too much effort to comprehend a piece of software, people can no longer keep up with it on their time. They have to be paid to do so. That biases the playing field between insiders and users. A small number of people working on a popular piece of software can have disproportionate influence on the lives of people.

To me the argument of the previous paragraphs is ironclad. Which part do you disagree with? Is software already easy to comprehend? Is comprehensibility to outsiders not the #1 problem in software? Is the difficulty in comprehension not due to tragedy-of-the-commons effects?

---

I'm going to stop debating the non-software side of things, since I'm not an expert there and I don't really have any solutions to offer. I strongly feel the problems of limited attention carry over there. But it's probably easier for me to convince you of that in the realm of software. Oh wait, one final note:

> Clearly our legislators have no difficulty pumping out more and more laws every year...

Have you not met programmers talented at churning out crap? The difficulty is not in writing new laws, but in having the whole make sense. At least in software the computer enforces some basic checking. If the program crashes, that's visible to all. Contradictions in laws can fester for long periods until someone works up the resources to take a case all the way to the Supreme Court. (Why in the world don't US courts deal with counterfactuals? The whole principle of "case or controversy" (https://en.wikipedia.org/wiki/Advisory_opinion#United_States) is bound up in pre-software thinking. In my ideal world courts would be able to give feedback on bills even before they are turned into Law, and actually influence how legislators voted on them. "Have you considered this corner case?")

-----