"At best the number of different, incompatible systems will be lower over time. But there's no reason to believe that dissatisfaction with prior solutions will be more likely to build on them."
Could you clarify this? I think there might be a typo in here, but I don't know where.
Say a 95% solution leaves 1 in 20 hackers dissatisfied, where an 80% solution leaves 1 in 5 hackers dissatisfied. The number of dissatisfied hackers goes down. But won't they still continue to react to their dissatisfaction by creating new libraries that don't build on prior attempts?
"The number of dissatisfied hackers goes down. But won't they still continue to react to their dissatisfaction by creating new libraries that don't build on prior attempts?"
Who's saying they won't? If you're willing to believe that 100% solutions will lead to fewer dissatisfied hackers and less duplication of effort, I can't tell what other claims you're trying to challenge here.
---
To bring in my personal goals, I'm interested in using programming to improve the expressiveness of communication, so that we have less severity in our petty misunderstandings, our terminology barriers, etc.
While I'm interested in reducing duplicated effort or saving people from dissatisfaction, I'm mainly interested in these things because they go hand-in-hand with establishing better communication. If hackers are more productive, they can communicate more; and if they communicate more, they can find satisfactory tools and avoid duplicated effort.
---
I think it'll be more interesting to look at the "80% solution" idea in a scenario that combines more than one solution, so that it's not a simple feedback loop anymore.
Suppose we have two completely unrelated projects A and B, with A being a 95% solution and B being an 80% solution.
Now let's say some hackers have goals in the A+B domain, so that people might suggest for them to use project A and/or B as part of the solution.
Assuming satisfaction with each project is independent, only 76% (0.95 × 0.80) are actually satisfied with A and satisfied with B at the same time. About 4/5 of the remaining hackers can still build on top of A, but they have to reinvent the functionality they hoped to get from B.[1]
Unfortunately, sometimes it's very difficult to combine the two dependencies A and B in a single program. If A and B involve different operating systems, different programming languages, etc., then the task of combining them may be even more difficult than reinventing the wheel. In these cases, the 76% quadrant of happy hackers must redistribute itself among the other quadrants.
How does it get distributed? Well, the user community itself is a feature of the system, so I'm assuming a 95% solution tends, almost by definition, to have a more helpful community than an 80% solution. We also haven't considered any reason that a helpful hacker would make a different technology choice than an unhelpful one, so I'll make the simplifying assumption that all communities have the same population-to-helpfulness ratio.[2] So I'd guess the 76% "+A+B" quadrant gets redistributed to the 19% "+A-B" and 4% "-A+B" quadrants while roughly preserving that 19:4 ratio.
        +A    -A
  +B    0%   17%
  -B   82%    1%
This fuzzy reasoning suggests that if an 80% solution B is incompatible with a 95% solution A, approximately 83% of hackers won't use B to achieve their A+B goals. So we see that an 80% solution can leave the vast majority of hackers dissatisfied.
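The quadrant arithmetic above can be sketched in a few lines of Python. This assumes, as before, that satisfaction with A and with B are independent events, and that the "+A+B" quadrant redistributes in proportion to the existing 19:4 ratio:

```python
# Satisfaction rates for the two solutions.
p_a, p_b = 0.95, 0.80

# The four quadrants, before considering compatibility.
both = p_a * p_b                  # +A+B: 0.76
a_only = p_a * (1 - p_b)          # +A-B: 0.19
b_only = (1 - p_a) * p_b          # -A+B: 0.04
neither = (1 - p_a) * (1 - p_b)   # -A-B: 0.01

# If A and B can't be combined, the +A+B quadrant redistributes
# to +A-B and -A+B while preserving their 19:4 ratio.
a_only_after = a_only + both * a_only / (a_only + b_only)
b_only_after = b_only + both * b_only / (a_only + b_only)

print(round(a_only_after, 2))  # 0.82
print(round(b_only_after, 2))  # 0.17
```

That reproduces the 82%/17%/1% split in the table, and the 82% + 1% = 83% of hackers who end up not using B.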
But 80% solutions must be acceptable in our software, or else all our software will be monolithic projects that take lots of resources to develop and maintain, and we won't be communicating very effectively.
Personally, one thing I take away from this is that the world could really use a near-100% approach to modularity, so that almost any two well-designed projects A and B can be used together. I've been working toward this.
(However, I also think this could backfire to some degree. Reusable software that doesn't bit-rot is in a way immortal, and certain immortal software may get in the way of or add noise to our communication.)
---
[1] While this 4/5 may look like it corresponds to the 80% solution, that's just a coincidence. This 4/5 roughly comes from the remaining 20% and 5% that each solution doesn't cover. We'd get a similar 4/5 result if we compared a 96% solution with a 99% solution.
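To make the footnote's claim concrete, here's a small check (under the same independence assumption) of what fraction of the dissatisfied hackers can still build on the stronger solution A:

```python
def fraction_building_on_a(p_a, p_b):
    """Among hackers not satisfied with both solutions, the fraction
    satisfied with A alone (so they can build on A and reinvent B)."""
    dissatisfied = 1 - p_a * p_b
    a_only = p_a * (1 - p_b)
    return a_only / dissatisfied

print(fraction_building_on_a(0.95, 0.80))  # ~0.79, roughly 4/5
print(fraction_building_on_a(0.99, 0.96))  # ~0.80, also roughly 4/5
```

As the footnote says, the 4/5 is not the 80% of solution B; a 99%-vs-96% comparison lands in the same place.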
[2] Some reasons this might be wrong: A big community invites people to use it as a symbol for inclusion-exclusion politics, even if that's non-sequitur with the original purpose of the community. Sometimes technology has lots of users not because its community is helpful or political, but because few people have the kind of expertise it would take to reinvent this wheel at all (not to mention how many people don't even know they have a choice).
Hmm, my interpretation of the socket complaint (that I quoted above) was that it was a qualitative rather than a quantitative argument.
Ah, I think I've found a way to reframe what Olin Shivers is saying that helps me make sense of it. The whole 80% thing is confusing. The real problem is solutions that don't work for the very first problem you try them out on. That invariably sucks. Whereas a solution that is almost certain to support the initial use case of pretty much anyone is going to sucker in users, so that later, when they run into its limitations, it makes more sense to fix or enhance it than to shop around and then NIH ("not invented here", i.e. reinvent) a new solution for themselves. In this way it hopefully continues to receive enhancements, thereby guaranteeing that still more users are inclined to switch to it.
:) I'm using some critical-sounding language there ("sucker") just to help highlight the core dynamic. No value judgement implied.
---
I think both of us are starting with the same goal of helping hackers communicate better and share code better. But we're choosing different approaches. Your approach of fixing modularity is compatible with the Olin Shivers model, but I think both of you assume that receiving enhancements is always a good thing, and that things monotonically improve. I don't buy that more and more hackers adding code and enhancements always improves things in the current world. There's a sweet spot beyond which it starts to hurt and turns away users, who go off trying to NIH their own thing all over again.
My (more speculative) idea instead is to ensure more people understand the internals so that the escape hatch isn't NIH'ing something from scratch but forking from a version that is reasonable to them and also more likely to be understandable to others. By 'softening' the interface I'm hoping to make it more sticky over really long periods, more than a couple of generations. I don't think modularity will work at such long timescales.
It's not packaged up, all the internals are hanging out, some assembly is required. But it enumerates the scenarios I considered so that others can understand precisely what I was trying to achieve and hopefully build on it if their needs are concordant.