This is a very good point that has been proven over and over again in the industry. I recall being at Sun and having the argument over ONC and whether or not it should be "open" (which at the time meant everyone could get a copy of the code[1]) or "closed". Ed Zander was a big fan of keeping everything secret; after all, anyone could reproduce it if they had the code, right? I used the same argument as the author: if someone was a decent programmer and willing to invest the time, they could recreate it from scratch without the code, so keeping the code secret merely slowed them down fractionally. Letting our licensees read the code, on the other hand, allowed them to better understand what worked and why, and to release products that used it faster, which would contribute to its success in the marketplace.
I lost that battle and ONC+ was locked behind the wall until OpenSolaris 20 years later. So many people in tech cannot (or perhaps will not) distinguish between "value" and "cost". It's like people who confuse "wealth" and "money": closely related concepts that are fundamentally different things.
This is why you invest in people and expertise, not tools. Anyone can learn a new toolset, but only the people with expertise can create things of value.
[1] So still licensed, but you couldn't use the trademark if you didn't license it, and there was no 'warranty' because the trademark required passing an interoperability test.
> if someone was a decent programmer and willing to invest the time, they could recreate it from scratch without the code
This is a stronger claim than the one in the article, which says it's easy for someone to recreate it if they were involved in building it in the first place.
It's not true that every piece of software is trivial to copy in a clean-room way, that is, only being able to observe its behaviour and not any implementation details.
There is a feeling that releasing the code is "giving it away for free". But, being able to compile and deploy it is not the whole story. Enterprises need support from the people who built the thing, and so without that it is not a very attractive proposition.
It could be true in some scenarios though.
Microsoft doesn't open source Windows. A big enough company could fork it and offer enterprise support at a fraction of the cost. It would take them years to get there, and probably would be subpar to what large Windows customers get in support from Microsoft. Yes I know y'all hate dealing with Microsoft support - imagine that but worse. Still, the company with the forked distro would definitely take a bite out of Microsoft's Windows business, if only a small one.
> Still, the company with the forked distro would definitely take a bite out of Microsoft's Windows business, if only a small one.
That has not been shown to be the case. There is ample evidence that other companies would run this 'off market' or 'pirate' version, and zero evidence that if those choices had been unavailable that they would have legitimately licensed Windows.
You are making a variant on the 'piracy losses' argument, which has been shown to be simply a pricing issue. If you "ask" for more than your product is "valued" then it won't be purchased, but it may be stolen. And if you make it "impossible" to steal you will reduce its value to legitimate customers and have zero gain in revenue from those who had stolen it before (they still won't buy it).
The "value" in Windows is the number of things that run on it and the fact that compatibility issues are "bugs" which get fixed by the supplier. We are rapidly reaching the point where it will add value to have an operating system for AMD64 hardware that is overtly governed (not Linux or FOSS) which allows you to get a copy of the source when you license it, and has an application binary interface (ABI) that other software developers can count on to exist, not change out from under them, and last for 10+ years.
As Microsoft (and Apple) add more and more spurious features which enrich themselves and enrage their users the "value" becomes less and less. That calculus will flip and when it does enterprises will switch to the new operating system that is just an operating system and not a malware delivery platform.
> You are making a variant on the 'piracy losses' argument which has been shown is simply a pricing issue.
That works for individuals. In many (most?) countries, the calculus for companies is vastly different. It takes one disgruntled employee or a bad dice roll to end up being audited for use of pirated software; between the regulators in many countries being on the side of the copyright holders on this, and the company itself being a much easier and juicier legal target than a bunch of regular people, the costs of getting caught using a bootleg Windows copy commercially far outweigh the costs of just licensing it.
With Windows also providing genuine value, the choice for companies isn't between licensing or pirating - it's between licensing, or sacrificing some other part of the business to scrounge up money for licensing, or not doing the business in the first place.
(Yes, the boundary between individuals and companies is fuzzy; this argument is somewhat weak for some classes of sole proprietorships, but generally solidifies quickly as the headcount of an org grows towards double-digit numbers.)
>>have an operating system for AMD64 hardware that is overtly governed (not Linux or FOSS)
Not understanding this part, aren't Linux distros achieving this already without licence restrictions and various levels of stability depending on the distro selected?
A huge amount of enterprise tooling is now being run in the cloud through the browser or via Electron; for a large number of businesses, their staff would only need the equivalent of a Chromebook-style GUI to perform their work.
Native software is still essential for a small percentage of users... is this what you're suggesting needs to be solved? A single alternative open source system (OS or VM?) that the software dev company can target.
>Not understanding this part, aren't Linux distros achieving this already without licence restrictions and various levels of stability depending on the distro selected?
No. Ask yourself: if I install distro <pick one>, can I run a complex binary from 2015 on it? To pull off that kind of stunt you need to ensure you have control over changes not only in the kernel, but also in all of the associated user libraries and management tools. There are change paths for everything from how daemons get started to how graphics are rendered and sound is produced that are incompatible with themselves, much less with other versions from 10 years ago. That is not a support burden that someone selling a specialized piece of software can easily take on. It makes their cost of development higher, and so their price higher, which loses them business.
> No. Ask yourself, if I install distro <pick one>, can I run a complex binary from 2015 on it?
Does a Go binary count? Half joking, but this is why "builds statically, only depends on syscalls" is making inroads. The same applies to static linking against musl.
I get that you're kind of joking, but you're right! Because nobody can "change" the Win32 ABI except Microsoft, you don't get contributors pushing various "feature improvements" on it (not that there aren't a bunch of things one might do differently than the way the Win32 API does them, right?). It's that externally enforced control that isn't possible with Linux/FOSS ecosystems. The 'why' of that is that people like Canonical can't afford to pay enough engineers to 'own' the whole system, and their user base gets bent out of shape when they do. It breaks the social contract that Linux has established.
The only way to change that is to start with a new social contract which is "You pay us to license a copy of this OS and we'll keep it compatible for all your apps that run on it."
While I sympathize with your need, I don't think we'll see a new OS fill this space.
Firstly, there's the obvious "all the apps you run on it". Your new OS has no apps, and even if a few emerged no business really wants to commit to running on a new OS with only a couple apps.
I mean, if you want a stable OS there's always BSD, or BeOS or whatever. Which we ignore because, you know, Windows. (And I know it's fun to complain about ads on Windows and Microsoft in general, but there's a reason they own the market.) Oh, and business users don't see the things folk complain about anyway.
Personally I have utilities on windows that were last compiled over 20 years ago that still run fine.
Secondly, no OS operates in a vacuum. You need to store data (database), browse the web, communicate, secure traffic and so on. Those are very dynamic. And again, (by far) the most stable place to run those things is Windows. Like Postgres 9, from 15 years ago, is still used in production.
Of course it's also possible to freeze any OS and apps at any time and it will "run forever" - or at least until the version of TLS it supports dies.
So no, I don't believe there will be a new OS. Windows Phone died because there were no apps. Your new OS will have the same problem.
> You are making a variant on the 'piracy losses' argument which has been shown is simply a pricing issue
An astute reader would find I am not in fact making that argument, and I suspect if we got into the weeds with it, we would find we agree with each other.
A couple of months back, someone posted how they lost a day's work due to a hard drive crash and had to redo it. It took them roughly 30 minutes.
Their point was the same as this article with a shorter time window. Knowing what to do, not how to do it, is 90% of the battle.
But that is counterintuitive to the lay observer of software. They think they know what to do, because they’ve got ideas, but feel inhibited because they don’t yet know how to achieve them. So they assume that their immediate hurdle must be the hard part of software development.
That's a lack of financial sense.
In finance, we learn about the time value of money.
Code allows someone to go faster. This is literally a repository of time. So it is valuable.
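The time-value framing above can be made concrete with the standard discounting formula, PV = FV / (1 + r)^n. This is only an illustration; the figures are invented:

```python
def present_value(future_value: float, rate: float, periods: int) -> float:
    """Discount a future amount back to today at a per-period rate."""
    return future_value / (1 + rate) ** periods

# Hypothetical numbers: suppose reusing an existing codebase saves
# roughly $300k of engineering cost realized one year out, and the
# business discounts future cash at 5% per year.
savings_today = present_value(300_000, 0.05, 1)
print(round(savings_today, 2))  # 285714.29
```

In that framing, working code is a claim on future time savings, and throwing it away discards that claim.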
When I look at coders, I sometimes get the sense of them being akin to financial anarchists. Maybe the word is harsh but how can they devalue themselves so much!?
That's beyond me. Nerds, you deserve better. (especially non US based ones)
> And I’d go further than that. I’d suggest that, contrary to what intuition might tell you, refactoring might be better achieved by throwing the code away and starting again.
I don't think this applies in most situations. If you have been part of the original core team and are rewriting the app in the same way, this might be true - basically a lost code situation, like the author was in.
However, if you are doing so because you lack understanding of the original code or you are switching the stack, you will inevitably find new obstacles and repeat mistakes that were fixed in the original prototype. Also, in a real world situation, you probably also have to handle fun things like data import/migration, upgrading production instances and serving customers (and possibly fixing bugs) while having your rewrite as a side project. I'm not saying that a rewrite is never the answer, but the author's situation was pretty unique.
Anyone truly considering this should weigh up this post with the timeless wisdom in Joel Spolsky's seminal piece, 'Things You Should Never Do'[1]. Rewriting from scratch can often be a very costly mistake. Granted, it's not as simple as "never do this" but it's not a decision one should make lightly.
The last rewrite I've seen completed (which was justified to a point as the previous system had some massive issues) took 3 years and burned down practically an entire org (multiple people left, some were managed out including two leads, the director was ejected after 18ish months) which was healthy-ish and productive before the rewrite. It's still causing operational pain and does not fully cover all edge cases.
I'm seeing another now in $current_job and I'm seeing similar symptoms (though the system being rewritten is far less important) and customers of the old system essentially abandoned to themselves and marketing and sales are scrambling to try to retain them.
Anecdotal experience is not so good. Rewriting a tiny component? Ok. Full on rewrite of a big system? I feel it's a bad idea and the wisdom holds true.
Spot on. It seems that OP is considering (1) a rewrite that can entirely fit into the mind of an engineerXYZ, and also (2) will be led by the same engineerXYZ, through executive empowerment.
I guess that in your case probably (1) did not hold. Or maybe (2) did not hold, or both.
OP's experiment doesn't prove at all that an entire org can rewrite a complex app where 1 & 2 do not hold. Every indication we have is that orgs' executive functions perform abysmally at code writing (and rewriting). So exactly the point you are making. It would obviously mean that there is value in code, along with the value in the org, once we get above the level of value that conceptually fits into one head.
I'm trying to make a small, efficient alternative to Pip. I could never realistically get there by starting with Pip and trimming it down, dropping dependencies, reworking the caching strategy etc. etc. But because I've studied Pip (and the problems it solves), I have a roadmap to taking advantage of a new caching strategy (incidentally similar to uv's), etc. - and I'll (probably) never have to introduce (most of) the heavyweight dependencies in the first place.
Understanding doesn't have to come from "being part of the original core team". Although if you aim to be feature-complete and interface-compatible, I'm sure it helps an awful lot.
> if you are doing so because you lack understanding of the original code
As I understood it, the key point of the article is that the understanding is the value. If you don't understand the code, then you've lost the value. That's why rebuilds by new folk who don't understand the solution don't work.
Large sweeping software initiatives that go nowhere and are replaced by a product from a more agile team aspect isn't that unique, though the author being on both teams is.
Tell that to the 80% of the employees laid off after Musk bought Twitter.
They used to tell me I was building a dream
And so I followed the mob
When there was earth to plow or guns to bear
I was always there, right on the job
Once I built a railroad, I made it run
Made it race against time
Once I built a railroad, now it's done
Brother, can you spare a dime?
Once I built a tower up to the sun
Brick and rivet and lime
Once I built a tower, now it's done
Brother, can you spare a dime?
We got bought out a number of years ago. We'd been pretty liberal with our code up until that point, as I'm sure many tiny companies are, but our new owners were very insistent on locking down "Intellectual Property" to exclusively company controlled hardware locked behind SSO, ready to lock anyone out at a moment's notice...
You are putting a pretty basic CRUD app in Fort Knox. We're not building anything super proprietary or patentable, it's not rocket science. Anyone could rebuild something roughly analogous to our app in a matter of weeks.
The code isn't the value. Our connections, contracts and content are our value. Our people and our know how is the value.
The code is almost worthless on its own. The time and thus money we've spent has been far more in finding and fine tuning the user experience than in "writing code". These are things exposed to anyone who uses our app.
You could genuinely email all our code to our direct competitor and it wouldn't help them at all.
One step crazier: Companies that advertise a product and then lock down their API docs so that nobody can see them without being a current customer with need-to-know.
A different perspective is that there is a vast body of result-of-thought-and-experience associated with developed software. That is then lossily encoded in many forms. Memory, judgment, skill, team, contacts, customers, docs, test suite, other assorted software, etc. It's much easier to reimplement a language when you have a great language test suite. Easier to create a product if you already have a close relationship with its customers. Easier to implement something for the 3rd, 4th, 10th time. Etc. And the assorted forms have results they encode well and not so much, and assorted strengths and weaknesses as mechanism. Memories decay, and aren't great for details. Judgment transfers well; leaves. Teams shuffle. Tests become friction. Software works; ossifies.
Insightful synthesis around even a single form isn't exactly common. The art of managing test suites for instance. An insightful synthesis of many forms... I've not yet seen.
I had a similar feeling when I finished my freelance projects, that I would be difficult to replace, or that it would be easier to start from scratch than to try and decipher the system.
That's partly because I was being "too creative" — I love making things from scratch, but that's suboptimal from a business perspective for several reasons. And partly because I didn't document the decisions very well (except in random comments).
So I had the feeling like most of the value was in my head, and a lot of work would have to be repeated with the next guy.
Aside from just inexperience and lack of professionalism on my part, there seems to be some tension here between what's good / enjoyable for me as a developer (making everything myself) vs what's good for the business (probably WordPress / PHP).
I have a problem that I test new UI frameworks on; generating celtic knotwork. I've written three of these now (React, VueJS, and Go manipulating SVGs). The first was hard because I had to learn how to solve the problem. The others were hard because I had to learn how that solution changed because of the framework.
There's a joy in rewriting software, it is obviously better the second time around. As the author says, the mistakes become apparent in hindsight and only by throwing it all away can we really see how to do it better.
I also sketch (badly) and the same is true there; sketching the same scene multiple times is the easiest way of improving and getting better.
From my little experience I find this article's point to be true. I've been in a few organisations where the flow of people (students) is constant, so experience constantly leaves and mistakes get repeated (we haven't figured out a proper knowledge base). All one can do is try their best to transfer the experience before graduating, but taught lessons don't stick as deeply as firsthand learnings (with the associated toil...)
Hmm. We've often discussed AI as programmer codegen tool, and as vibe coder. But there have been other roles over the decades associated with programming. Perhaps AI could serve as a team Librarian? Historian? Backup Programmer (check-and-balance to a programmer)? A kibitzing greybeard institutional memory? Team Lead for humans? Coach/Mentor? Something else? Mob programming participant?
The software development company that I worked at for 20 years, made a specialised practice management system that was (at the time) years ahead of the competition - a real Windows experience with a real database, where the competition was all DOS based or using Access. At one point, they sold the rights to develop their software in a particular country to another company (they had no plans to enter that market, and were a bit strapped for cash at the time). So the other company got the source code, and - in the spirit of the time - insisted it was printed out on paper too!
The other company never managed to do anything with it in the end - having the source code for the entire product was not enough!
> The design was in my head, and with hindsight I could see all its flaws. I was therefore able to [re]create a much more efficient and effective design based on that learning. All the mistakes had been made, so I was able to get this version of the code right the first time.
Notice the critically important difference of recreating an existing design, vs using the rewrite as an opportunity to experiment on the design and the implementation (and the language, and the ...).
Vetting a new design takes time, consensus, and subjective judgement. Re-implementing an existing design is laser focused and objective.
While people and organizational knowledge are important, I have to disagree with the article. Code has value, tremendous value in fact. It’s the only record of truth of a software product. The code of a working product records the decisions, the designs, solved problems and solved mistakes during the development. Software development is not just writing the code. The code is the end product of the development process, which can be long and arduous. Yes, it can be reproduced with skill, time and money, but that can be prohibitively expensive. Therein lies the value of the code.
Edit: case in point, Sybase created SQL Server. During due diligence for a business partnership with Microsoft, Microsoft “borrowed” a copy of the source code (not sure about the details). After much legal wrangling, Sybase was forced to license it to Microsoft due to the loss of leverage. Microsoft released it as MS SQL Server. It took Microsoft years and years of work to finally replace the code piece by piece.
>The code of a working product records the decisions, the designs, solved problems and solved mistakes during the development.
Our experiences apparently differ. I've worked on dozens of large scale systems, and due to the lack of up-to-date documentation and comments in the code, the developers have had to re-engineer most of those details in order to make even minor changes as the requirements evolve over the years. The code might work, but the knowledge of how and why is generally lost to entropy.
Yes sure, the code might be unreadable, but it's the working copy that any changes are based on and run against. Throwing it away and recreating the changes in a vacuum would be very difficult.
“All the value is stored up in the team, the logic and the design, and very little of it is in the code itself.”
This is a key reason it’s so important to knowledge-share within teams. For all kinds of reasons, people move on, and any knowledge they hold without redundancy becomes unavailable.
Also a good reason why commenting can help: then maybe a bit more of the value IS in the code.
The value is in keeping the code running. Unless you are doing something very complex from scratch, you are mostly writing code for hire or for your startup. Nowadays, that code does not really take that long to develop. You will be wrapping, mixing, and orchestrating several paid, free, and OSS APIs and frameworks to work together with your solution to the problem. This will take six to eighteen months to complete. Then the value starts. That code is making you or someone else money. If you stop efficiently and correctly managing the code, someone loses money.
> web portal I was involved in developing as part of an all remote team back just before the turn of the millennium.

> I can conclude that of the 6 months of time spent by 7 people creating this solution, hardly any of it related to the code. It could be completely discarded and rebuilt by one person in under two weeks.
I bet he did it recently and that undermines his whole thesis. He would need to have redeveloped it before 2000, to support his argument. I would also suspect he only made a toy 80% working example and that it only needed the other 90% to be completed (e.g. administrative or developer focused features). I'm pattern matching with other developers I've heard say similar things.
Information that articles ignore is often critical; moreover we judge articles based on the meta-decision of "what critical information was ignored". The article severely misses some key points.
A better example that a developer is more valuable than the code: when a key member of a company goes off and greenfield develops a competitor that wins (but still not an independent measure due to confounding effects).
In some situations I would agree with the thesis, but unfortunately the article poorly argues the point.
I can say that in learning go several years ago, rewriting a ~10k LOC app that I had done in python, I definitely learnt some new paradigms in go that allowed me new perspective in what I had done in python.
I would have gone back to fix my python code, but I'm happy with the rewrite in go (runs faster, has far more test cases, which allowed me to add more functionality easier)
And yes, the rewrite took me ~50% of the time, and most of that was due to it being an exercise in learning go (including various go footguns).
> "It’s a scary thought. You might even consider it ridiculously farfetched. I wouldn’t expect you to agree with me based on a blog post. However, what I would recommend is that you give it some serious thought, and maybe conduct a similar experiment of your own. If you do, let me know how you get on. I’d be genuinely interested to find out."

There are no comments on your blog, so I am not sure I can express any of my opinions there.
I want to hear the perspective of someone who lost 100kloc and had to rewrite it all!
There is a lot of value in code. It works in prod. It is continuously regression tested by its load, so when there is a problem you figure out a tiny delta to fix it.
If you rewrite from memories you'll get a lot of bugs you need to fix again.
Code being worthless and "must keep PRs small" seem to be in tension.
Another big piece I would add to this is the processes that enable organizations to ship code. Not just processes that are directly related to the product and code, but other organizational processes like hiring, sales, support, etc.
Efficient processes require a lot of thought to develop and implement. When a badly-run organization acquires a good piece of code, it will eventually start to stagnate and bloat.
> finally, there’s the code. That also takes time, but that time is small in comparison to all the others ... The developer’s answer to all of this is “refactoring”. For those of you who don’t code, refactoring is ... This takes time.
Yeah, please never manage a software team. Thanks.
"Programming as Theory Building" 1985 by Peter Naur makes the same case and works out some of the implications. One of my favorite computer engineering papers.
I very much agree with this. Find a job where you can meaningfully contribute to a product that is important to your company (generates a meaningful amount of revenue), and after a little while the natural thing that happens is you become irreplaceable because of that retained value in your head.
It doesn’t have to happen, and with some effort can be somewhat avoided, but it’s the default outcome. Depending on your goals and career aspirations, this can be a wonderful thing, or it can be a bit of a curse.
> No. I achieved this because the code contained very little of the real value. That was all stored in my head. The design was in my head, and with hindsight I could see all its flaws. I was therefore able to create a much more efficient and effective design based on that learning.
Exactly. Code is cheap to write. Even a lot of it. What's hard is understanding a problem thoroughly enough to model a correct solution. Once you have that, you've done 90% of the work.
This is the classic, very domain specific wisdom that does not extrapolate to all software. Some codebases are narrow in focus and rely on solving a problem in a smart way. Other applications just need to hold thousands of data points that come from variety of sources. Here the code holds tens of thousands of priceless details that you will inevitably forget like "this integration sends local time but doesn't adjust for daylight savings", and the value of holding these details will persist especially after all the tribal knowledge dissipates.
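That daylight-savings detail is exactly the kind of thing that is trivial to keep in code and painful to rediscover. A minimal sketch (the integration, timestamps, and zone name are hypothetical) of the one-line fix such a codebase would quietly carry:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def normalize(ts: str, sender_zone: str = "Europe/Berlin") -> datetime:
    """Parse a naive local timestamp from the integration and convert to UTC."""
    naive = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
    # The sender omits the offset, so we must attach its zone ourselves
    # before converting; this is the detail a rewrite tends to forget.
    return naive.replace(tzinfo=ZoneInfo(sender_zone)).astimezone(timezone.utc)

# The same wall-clock time maps to different UTC instants across the
# DST boundary:
print(normalize("2024-01-15 12:00:00").hour)  # 11 (UTC+1 in winter)
print(normalize("2024-07-15 12:00:00").hour)  # 10 (UTC+2 in summer)
```

A rewrite from memory reproduces the happy path; these small accumulated corrections are where the regressions come from.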
It's actually the inverse. Code is a liability, software is value. That's why we spent all our time splitting hairs to get more elegant ways to try and produce more software with less code.
I'm not so sure this is that black and white. Software without code is also a liability, I would avoid basing important system on software whose code you have no access to for example.
The data model in the code is what's important, but more important than that is the discovery and uncovered details that's happened while writing the code. That whole refactoring thing is just a sideshow to the underlying data model changing to better match the real world conditions the software has to run under.
> The design was in my head, and with hindsight I could see all its flaws. I was therefore able to create a much more efficient and effective design based on that learning. All the mistakes had been made, so I was able to get this version of the code right the first time.
I feel like you need to be careful with this. I've seen a lot of cases where the rewrite ends up being more complicated than the original, or even gets stuck in rewrite hell and never takes off. I think it's called the "second-system effect".
I mean code ages quickly, so the value of software must include the skillset needed to support and maintain it. Which is why enterprise software contracts exist, and are expensive. You're not paying for the binary. You're paying for the team supporting it.
This is a very good point that has been proven over and over again in the industry. I recall being at Sun and having the argument over ONC and whether or not it should be "open" (which at the time meant everyone could get a copy of the code[1]) or "closed". Ed Zander was a big fan of keeping everything secret, after all anyone could reproduce it if they had the code right? And I used the same argument as the author, which is that if someone was a decent programmer and willing to invest the time, they could recreate it from scratch without the code so keeping the code secret merely slowed them down fractionally but letting our licensees read the code allowed them to better understand what worked and why and could release products that used it faster, which would contribute to its success in the marketplace.
I lost that battle and ONC+ was locked behind the wall until Open Solaris 20 years later. So many people in tech cannot (or perhaps will not) distinguish between "value" and "cost". Its like people who confuse "wealth" and "money". Closely related topics that are fundamentally talking about different things.
This is why you invest in people and expertise, not tools. Anyone can learn a new toolset, but only the people with expertise can create things of value.
[1] So still licensed, but you couldn't use the trademark if you didn't license it and of course there was no 'warranty' because of course the trademark required an interoperability test.
> if someone was a decent programmer and willing to invest the time, they could recreate it from scratch without the code
This is a stronger claim than the one in the article, which says it's easy for someone to recreate it if they were involved in building it in the first place.
It's not true that every piece of software is trivial to copy in a clean-room way, that is, only being able to observe its behaviour and not any implementation details.
There is a feeling that releasing the code is "giving it away for free". But, being able to compile and deploy it is not the whole story. Enterprises need support from the people who built the thing, and so without that it is not a very attractive proposition. It could be true in some scenarios though.
Microsoft doesn't open source Windows. A big enough company could fork it and offer enterprise support at a fraction of the cost. It would take them years to get there, and probably would be subpar to what large Windows customers get in support from Microsoft. Yes I know y'all hate dealing with Microsoft support - imagine that but worse. Still, the company with the forked distro would definitely take a bite out of Microsoft's Windows business, if only a small one.
> Still, the company with the forked distro would definitely take a bite out of Microsoft's Windows business, if only a small one.
That has not been shown to be the case. There is ample evidence that other companies would run this 'off market' or 'pirate' version, and zero evidence that, had those choices been unavailable, they would have legitimately licensed Windows.
You are making a variant of the 'piracy losses' argument, which has been shown to be simply a pricing issue. If you "ask" for more than your product is "valued" at, it won't be purchased, but it may be stolen. And if you make it "impossible" to steal, you will reduce its value to legitimate customers and gain zero revenue from those who had stolen it before (they still won't buy it).
The "value" in Windows is the number of things that run on it and the fact that compatibility issues are "bugs" which get fixed by the supplier. We are rapidly reaching the point where it will add value to have an operating system for AMD64 hardware that is overtly governed (not Linux or FOSS) which allows you to get a copy of the source when you license it, and has an application binary interface (ABI) that other software developers can count on to exist, not change out from under them, and last for 10+ years.
As Microsoft (and Apple) add more and more spurious features that enrich themselves and enrage their users, the "value" becomes less and less. That calculus will flip, and when it does, enterprises will switch to the new operating system that is just an operating system and not a malware delivery platform.
> You are making a variant on the 'piracy losses' argument which has been shown is simply a pricing issue.
That works for individuals. In many (most?) countries, the calculus for companies is vastly different. It takes one disgruntled employee or a bad dice roll to end up audited for use of pirated software; between regulators in many countries siding with the copyright holders on this, and the company itself being a much easier and juicier legal target than a bunch of regular people, the cost of getting caught using a bootleg Windows copy commercially far outweighs the cost of just licensing it.
With Windows also providing genuine value, the choice for companies isn't between licensing or pirating; it's between licensing, sacrificing some other part of the business to scrounge up money for licensing, or not doing the business in the first place.
(Yes, the boundary between individuals and companies is fuzzy; this argument is somewhat weak for some classes of sole proprietorships, but generally solidifies quickly as the headcount of an org grows towards double-digit numbers.)
>>have an operating system for AMD64 hardware that is overtly governed (not Linux or FOSS)
Not understanding this part, aren't Linux distros achieving this already without licence restrictions and various levels of stability depending on the distro selected?
A huge amount of enterprise tooling is now being run on the cloud through the browser or via electron - for a large number of businesses, their staff would only need the equivalent of a Chromebook style GUI to perform their work.
Native software is still essential for a small percentage of users. Is this what you're suggesting needs to be solved? A single alternative open source system (OS or VM?) that software dev companies can target.
>Not understanding this part, aren't Linux distros achieving this already without licence restrictions and various levels of stability depending on the distro selected?
No. Ask yourself: if I install distro <pick one>, can I run a complex binary from 2015 on it? To pull off that kind of stunt you need to ensure you have control over changes not only in the kernel, but also in all of the associated user libraries and management tools. The change paths for everything from how daemons get started to how graphics are rendered and sound is produced are incompatible across versions, much less with versions from 10 years ago. That is not a support burden that someone selling a specialized piece of software can easily take on. It makes their cost of development higher, and so their price higher, which loses them business.
> No. Ask yourself, if I install distro <pick one>, can I run a complex binary from 2015 on it?
Does a Go binary count? Half joking, but this is why "builds statically, only depends on syscalls" is making inroads. The same applies to static linking against musl.
Yeah, if it uses the Win32 API!
Thanks to Wine, it’s the most stable API/ABI Linux has!
I’m kind of joking, but the main issue probably lies with the libc rather than with Linux itself.
I get that you're kind of joking, but you're right! Because nobody can "change" the Win32 ABI except Microsoft, you don't get contributors pushing various "feature improvements" on it (not that there aren't a bunch of things one might do differently from the way the Win32 API does them, right?). It's that externally enforced control that isn't possible in Linux/FOSS ecosystems. The 'why' of that is that people like Canonical can't afford to pay enough engineers to 'own' the whole system, and their user base gets bent out of shape when they try. It breaks the social contract that Linux has established.
The only way to change that is to start with a new social contract which is "You pay us to license a copy of this OS and we'll keep it compatible for all your apps that run on it."
While I sympathize with your need, I don't think we'll see a new OS fill this space.
Firstly, there's the obvious "all the apps you run on it" problem. Your new OS has no apps, and even if a few emerged, no business really wants to commit to running on a new OS with only a couple of apps.
I mean, if you want a stable OS there's always BSD, or BeOS, or whatever. Which we ignore because, you know, Windows. (And I know it's fun to complain about ads on Windows and Microsoft in general, but there's a reason they own the market.) Oh, and business users don't see the things folk complain about anyway.
Personally I have utilities on windows that were last compiled over 20 years ago that still run fine.
Secondly no OS operates in a vacuum. You need to store data, (database) browse the web, communicate, secure traffic and so on. Those are very dynamic. And again (by far) the most stable place to run those things is Windows. Like Postgres 9, from 15 years ago, is still used in production.
Of course, it's also possible to freeze any OS and apps at any time and it will "run forever", or at least until the version of TLS it supports dies.
So no, I don't believe there will be a new OS. Windows Phone died because there were no apps. Your new OS will have the same problem.
> You are making a variant on the 'piracy losses' argument which has been shown is simply a pricing issue
An astute reader would find I am not in fact making that argument, and I suspect if we got into the weeds with it, we would find we agree with each other.
A couple of months back, someone posted how they lost a day's work due to a hard drive crash and had to redo it. It took them roughly 30 minutes.
Their point was the same as this article with a shorter time window. Knowing what to do, not how to do it, is 90% of the battle.
But that is counterintuitive to the lay observer of software. They think they know what to do, because they’ve got ideas, but feel inhibited because they don’t yet know how to achieve them. So they assume that their immediate hurdle must be the hard part of software development.
That's a lack of financial sense. In finance, we learn about the time value of money. Code allows someone to go faster; it is literally a repository of time. So it is valuable.
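The "repository of time" point can be made concrete with the standard discounting formula from finance (a toy sketch; the numbers are purely illustrative):

```python
def present_value(future_amount: float, annual_rate: float, years: float) -> float:
    """Standard discounting: PV = FV / (1 + r)^t.

    In this framing, a codebase that saves you a year of future work
    is a future cash flow you get to collect today.
    """
    return future_amount / (1 + annual_rate) ** years

# Receiving $110 a year from now, at a 10% discount rate,
# is worth about $100 today.
print(round(present_value(110.0, 0.10, 1), 2))
```

The same arithmetic is why working code is worth more than the cost of the keystrokes that produced it: it moves value from the future into the present.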
When I look at coders, I sometimes get the sense of them being akin to financial anarchists. Maybe the word is harsh but how can they devalue themselves so much!?
That's beyond me. Nerds, you deserve better. (especially non US based ones)
> And I’d go further than that. I’d suggest that, contrary to what intuition might tell you, refactoring might be better achieved by throwing the code away and starting again.
I don't think this applies in most situations. If you have been part of the original core team and are rewriting the app in the same way, this might be true - basically a lost code situation, like the author was in.
However, if you are doing so because you lack understanding of the original code, or because you are switching the stack, you will inevitably find new obstacles and repeat mistakes that were fixed in the original prototype. Also, in a real-world situation, you probably have to handle fun things like data import/migration, upgrading production instances, and serving customers (and possibly fixing bugs) while having your rewrite as a side project. I'm not saying that a rewrite is never the answer, but the author's situation was pretty unique.
Anyone truly considering this should weigh up this post with the timeless wisdom in Joel Spolsky's seminal piece, 'Things You Should Never Do'[1]. Rewriting from scratch can often be a very costly mistake. Granted, it's not as simple as "never do this" but it's not a decision one should make lightly.
1: https://www.joelonsoftware.com/2000/04/06/things-you-should-...
Fifteen years ago I agreed with his point. Today I do not.
The last rewrite I've seen completed (which was justified up to a point, as the previous system had some massive issues) took 3 years and burned down practically an entire org (multiple people left, some were managed out including two leads, and the director was ejected after 18-ish months) which was healthy-ish and productive before the rewrite. It's still causing operational pain and does not fully cover all edge cases.
I'm seeing another one now at $current_job with similar symptoms (though the system being rewritten is far less important): customers of the old system are essentially abandoned to themselves, and marketing and sales are scrambling to retain them.
Anecdotal experience is not so good. Rewriting a tiny component? Ok. Full on rewrite of a big system? I feel it's a bad idea and the wisdom holds true.
Spot on. It seems that OP is considering (1) a rewrite that can entirely fit into the mind of an engineer XYZ, and also (2) one that will be led by the same engineer XYZ, through executive empowerment.
I guess that in your case probably (1) did not hold. Or maybe (2) did not hold, or both.
OP's experiment doesn't prove at all that an entire org can rewrite a complex app where (1) and (2) do not hold. Every indication we have is that an org's executive functions perform abysmally at code writing (and rewriting). So, exactly the point you are making. It would obviously mean that there is value in the code, alongside the value in the org, once we get above the level of value that conceptually fits into one head.
IMHO, anecdotally, if you attempt a full rewrite under the same organizational conditions that resulted in code bad enough to warrant it...
...you're gonna get bad code again, or, as you say, worse. The impact of the organizational culture dwarfs everything else.
I'm trying to make a small, efficient alternative to Pip. I could never realistically get there by starting with Pip and trimming it down, dropping dependencies, reworking the caching strategy etc. etc. But because I've studied Pip (and the problems it solves), I have a roadmap to taking advantage of a new caching strategy (incidentally similar to uv's), etc. - and I'll (probably) never have to introduce (most of) the heavyweight dependencies in the first place.
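For illustration, a content-addressed wheel cache might key entries on the artifact's hash rather than the download URL (a minimal sketch under my own assumptions; the names and directory layout here are made up, not pip's or uv's actual scheme):

```python
import hashlib
from pathlib import Path

def artifact_sha256(data: bytes) -> str:
    """Hash of the downloaded wheel/sdist bytes."""
    return hashlib.sha256(data).hexdigest()

def wheel_cache_path(cache_root: Path, name: str, version: str, sha256: str) -> Path:
    """Content-addressed location for an unpacked wheel.

    Keying on the artifact's own hash means two installs that resolve to
    the same file share one cache entry, regardless of which index or URL
    served it. Sharding on the first two hex digits keeps any single
    directory from growing huge.
    """
    return cache_root / sha256[:2] / sha256 / f"{name}-{version}"

digest = artifact_sha256(b"pretend these are wheel bytes")
print(wheel_cache_path(Path("/tmp/cache"), "requests", "2.31.0", digest))
```

The point of the roadmap is that design decisions like this come from studying the problem, not from possessing the original code.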
Understanding doesn't have to come from "being part of the original core team". Although if you aim to be feature-complete and interface-compatible, I'm sure it helps an awful lot.
You've hit on an important point in the article:
> if you are doing so because you lack understanding of the original code
As I understood it, the key point of the article is that the understanding is the value. If you don't understand the code, then you've lost the value. That's why rebuilds by new folk who don't understand the solution don't work.
Large, sweeping software initiatives that go nowhere and are replaced by a product from a more agile team aren't that unique; the author being on both teams is, though.
Tell that to the 80% of the employees laid off after Musk bought Twitter.
This begs the question of whether Musk's "non-code" managerial changes harmed the company's "value".
I think he did damage it, keeping in mind the difference between "no damage done" versus "damaged but partially repaired after a couple more years."
We got bought out a number of years ago. We'd been pretty liberal with our code up until that point, as I'm sure many tiny companies are, but our new owners were very insistent on locking down "Intellectual Property" to exclusively company-controlled hardware, locked behind SSO, ready to lock anyone out at a moment's notice...
You are putting a pretty basic CRUD app in Fort Knox. We're not building anything super proprietary or patentable, it's not rocket science. Anyone could rebuild something roughly analogous to our app in a matter of weeks.
The code isn't the value. Our connections, contracts, and content are our value. Our people and our know-how are the value.
The code is almost worthless on its own. The time and thus money we've spent has been far more in finding and fine tuning the user experience than in "writing code". These are things exposed to anyone who uses our app.
You could genuinely email all our code to our direct competitor and it wouldn't help them at all.
One step crazier: Companies that advertise a product and then lock down their API docs so that nobody can see them without being a current customer with need-to-know.
A different perspective is that there is a vast body of result-of-thought-and-experience associated with developed software. That is then lossily encoded in many forms. Memory, judgment, skill, team, contacts, customers, docs, test suite, other assorted software, etc. It's much easier to reimplement a language when you have a great language test suite. Easier to create a product if you already have a close relationship with its customers. Easier to implement something for the 3rd, 4th, 10th time. Etc. And the assorted forms have results they encode well and not so much, and assorted strengths and weaknesses as mechanism. Memories decay, and aren't great for details. Judgment transfers well; leaves. Teams shuffle. Tests become friction. Software works; ossifies.
Insightful synthesis around even a single form isn't exactly common. The art of managing test suites for instance. An insightful synthesis of many forms... I've not yet seen.
I had a similar feeling when I finished my freelance projects, that I would be difficult to replace, or that it would be easier to start from scratch than to try and decipher the system.
That's partly because I was being "too creative": I love making things from scratch, but that's suboptimal from a business perspective for several reasons. And partly because I didn't document the decisions very well (except in random comments).
So I had the feeling like most of the value was in my head, and a lot of work would have to be repeated with the next guy.
Aside from just inexperience and lack of professionalism on my part, there seems to be some tension here between what's good / enjoyable for me as a developer (making everything myself) vs what's good for the business (probably WordPress / PHP).
I have a problem that I test new UI frameworks on; generating celtic knotwork. I've written three of these now (React, VueJS, and Go manipulating SVGs). The first was hard because I had to learn how to solve the problem. The others were hard because I had to learn how that solution changed because of the framework.
There's a joy in rewriting software, it is obviously better the second time around. As the author says, the mistakes become apparent in hindsight and only by throwing it all away can we really see how to do it better.
I also sketch (badly) and the same is true there; sketching the same scene multiple times is the easiest way of improving and getting better.
I would love to see your Celtic knotwork generators, are they online?
From my little experience I find this article's point to be true. I've been in a few organisations where the flow of people (students) is constant, so experience constantly leaves and mistakes get repeated (we haven't figured out a proper knowledge base). All one can do is try their best to transfer the experience before graduating, but taught lessons don't stick as deeply as firsthand learnings (with the associated toil...)
Hmm. We've often discussed AI as programmer codegen tool, and as vibe coder. But there have been other roles over the decades associated with programming. Perhaps AI could serve as a team Librarian? Historian? Backup Programmer (check-and-balance to a programmer)? A kibitzing greybeard institutional memory? Team Lead for humans? Coach/Mentor? Something else? Mob programming participant?
The software development company that I worked at for 20 years, made a specialised practice management system that was (at the time) years ahead of the competition - a real Windows experience with a real database, where the competition was all DOS based or using Access. At one point, they sold the rights to develop their software in a particular country to another company (they had no plans to enter that market, and were a bit strapped for cash at the time). So the other company got the source code, and - in the spirit of the time - insisted it was printed out on paper too! The other company never managed to do anything with it in the end - having the source code for the entire product was not enough!
> The design was in my head, and with hindsight I could see all its flaws. I was therefore able to [re]create a much more efficient and effective design based on that learning. All the mistakes had been made, so I was able to get this version of the code right the first time.
Notice the critically important difference of recreating an existing design, vs using the rewrite as an opportunity to experiment on the design and the implementation (and the language, and the ...).
Vetting a new design takes time, consensus, and subjective judgement. Re-implementing an existing design is laser focused and objective.
While people and organizational knowledge are important, I have to disagree with the article. Code has value, tremendous value in fact. It's the only record of truth of a software product. The code of a working product records the decisions, the designs, the solved problems and the fixed mistakes of its development. Software development is not just writing the code; the code is the end product of a development process that can be long and arduous. Yes, it can be reproduced with skill, time, and money, but that can be prohibitively expensive. Therein lies the value of the code.
Edit: case in point, Sybase created SQL Server. During due diligence for a business partnership with Microsoft, Microsoft "borrowed" a copy of the source code (I'm not sure about the details). After much legal wrangling, Sybase was forced to license it to Microsoft due to the loss of leverage. Microsoft released it as MS SQL Server. It took Microsoft years and years of work to finally replace the code piece by piece.
>The code of a working product records the decisions, the designs, solved problems and solved mistakes during the development.
Our experiences apparently differ. I've worked on dozens of large scale systems and due to the lack of up to date documentation and comments in the code the developers have had to re-engineer most of those details in order to make even minor changes as the requirements evolve over the years. The code might work, but the knowledge of how and why is generally lost to entropy.
Yes, sure, the code might be unreadable, but it's the working copy that any changes are based on and run against. Throwing it away and recreating the changes in a vacuum would be very difficult.
“All the value is stored up in the team, the logic and the design, and very little of it is in the code itself.”
This is a key reason it’s so important to knowledge-share within teams. For all kinds of reasons, people move on, and any knowledge they hold without redundancy becomes unavailable.
Also a good reason why commenting can help: then maybe a bit more of the value IS in the code.
Not sure if the author can share the code. For claims like this, I'd like to see source code. I think a lot hinges on the details.
> All the value is stored up in the team, the logic and the design, and very little of it is in the code itself.
I regularly use a device with firmware that is probably dated around 1989.
Yes, the value is the code on that EPROM chip. The team is long gone.
This article's author has never used a program that was well developed, documented, and finished, so that all that is left is to use it.
Or any other great piece of engineering that was done and dusted, and didn't need a perpetual team to prop it up.
The value is in keeping the code running. Unless you are doing something very complex from scratch, you are mostly writing code for hire or for your startup. Nowadays, that code does not really take that long to develop. You will be wrapping, mixing, and orchestrating several paid, free, and OSS APIs and frameworks to work together with your solution to the problem. This will take six to eighteen months to complete. Then the value starts. That code is making you or someone else money. If you stop efficiently and correctly managing the code, someone loses money.
> web portal I was involved in developing as part of an all remote team back just before the turn of the millennium.

> I can conclude that of the 6 months of time spent by 7 people creating this solution, hardly any of it related to the code. It could be completely discarded and rebuilt by one person in under two weeks.
I bet he did it recently, and that undermines his whole thesis. He would need to have redeveloped it before 2000 to support his argument. I also suspect he only made a toy 80%-working example and that it only needed the other 90% to be completed (e.g. administrative or developer-focused features). I'm pattern matching with other developers I've heard say similar things.
Information that articles ignore is often critical; moreover, we judge articles based on the meta-decision of "what critical information was ignored". This article severely misses some key points.
A better example that a developer is more valuable than the code: when a key member of a company goes off and greenfield-develops a competitor that wins (though even that is not an independent measure, due to confounding effects).
In some situations I would agree with the thesis, but unfortunately the article poorly argues the point.
I can say that in learning Go several years ago, by rewriting a ~10k LOC app that I had done in Python, I definitely learnt some new paradigms in Go that gave me a new perspective on what I had done in Python.
I would have gone back to fix my Python code, but I'm happy with the rewrite in Go (it runs faster and has far more test cases, which made it easier to add functionality).
And yes, the rewrite took me ~50% of the time, and most of that was due to it being an exercise in learning Go (including various Go footguns).
"It’s a scary thought. You might even consider it ridiculously farfetched. I wouldn’t expect you to agree with me based on a blog post. However, what I would recommend is that you give it some serious thought, and maybe conduct a similar experiment of your own. If you do, let me know how you get on. I’d be genuinely interested to find out." there are no comments on your blog so i am not sure i can express any of my opinions there
I want to hear the perspective of someone who lost 100kloc and had to rewrite it all!
There is a lot of value in code. It works in prod. It is continuously regression tested by its load, so when there is a problem you figure out a tiny delta to fix it.
If you rewrite from memories you'll get a lot of bugs you need to fix again.
Code being worthless and "must keep PRs small" seem to be in tension.
Another big piece I would add to this is the processes that enable organizations to ship code. Not just processes that are directly related to the product and code, but other organizational processes like hiring, sales, support, etc.
Efficient processes require a lot of thought to develop and implement. When a badly-run organization acquires a good piece of code, it will eventually start to stagnate and bloat.
> finally, there’s the code. That also takes time, but that time is small in comparison to all the others ... The developer’s answer to all of this is “refactoring”. For those of you who don’t code, refactoring is ... This takes time.
Yeah, please never manage a software team. Thanks.
"Programming as Theory Building" 1985 by Peter Naur makes the same case and works out some of the implications. One of my favorite computer engineering papers.
https://pages.cs.wisc.edu/~remzi/Naur.pdf
Paper discussed multiple times on HN, notably:
https://news.ycombinator.com/item?id=10833278
I very much agree with this. Find a job where you can meaningfully contribute to a product that is important to your company (generates a meaningful amount of revenue), and after a little while the natural thing that happens is you become irreplaceable because of that retained value in your head.
It doesn’t have to happen, and with some effort can be somewhat avoided, but it’s the default outcome. Depending on your goals and career aspirations, this can be a wonderful thing, or it can be a bit of a curse.
> or it can be a bit of a curse.
Yeah, sorry arscan, you are too important to take a two week vacation.
> No. I achieved this because the code contained very little of the real value. That was all stored in my head. The design was in my head, and with hindsight I could see all its flaws. I was therefore able to create a much more efficient and effective design based on that learning.
Exactly. Code is cheap to write. Even a lot of it. What's hard is understanding a problem thoroughly enough to model a correct solution. Once you have that, you've done 90% of the work.
This is the classic, very domain-specific wisdom that does not extrapolate to all software. Some codebases are narrow in focus and rely on solving a problem in a smart way. Other applications just need to hold thousands of data points that come from a variety of sources. Here the code holds tens of thousands of priceless details that you will inevitably forget, like "this integration sends local time but doesn't adjust for daylight savings", and the value of holding these details will persist, especially after all the tribal knowledge dissipates.
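A toy Python illustration of the kind of detail the parent means (the feed, function names, and timezone are made up): the obvious reading and the correct one look almost identical, and only the code and its comment record the lesson.

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

def feed_timestamp_to_utc_naive(ts: str) -> datetime:
    # The obvious first guess: the feed sends local wall-clock time,
    # so attach the local zone and let it handle DST.
    return datetime.fromisoformat(ts).replace(tzinfo=ZoneInfo("Europe/Berlin"))

def feed_timestamp_to_utc(ts: str) -> datetime:
    # The hard-won detail: this integration sends "local time" but never
    # adjusts for daylight savings, so it is effectively a fixed UTC+1
    # year-round, and we must NOT apply the real zone's DST rules.
    return datetime.fromisoformat(ts).replace(tzinfo=timezone(timedelta(hours=1)))
```

In July the two functions disagree by an hour; in January they agree, which is exactly why this class of bug survives testing, and why throwing away the code that encodes the detail means rediscovering it the hard way.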
This is what people mean when they say build the first version to throw away.
And often the second :)
The code is valuable insofar as it maps to the real world.
what if you're coding a fantasy video game?
Whose time is money^n, where n varies by effectiveness. It's not simply a linear relationship. And building a team can make the n's increase together.
Mostly I just keep seeing all the references to "ajax" in the title image and wondering how old that stock photo is.
Corollary: The value is in vibe coding. AKA - knowing how to prompt well.
It's actually the inverse. Code is a liability, software is value. That's why we spent all our time splitting hairs to get more elegant ways to try and produce more software with less code.
I'm not so sure this is that black and white. Software without code is also a liability; I would avoid basing an important system on software whose code I have no access to, for example.
If you lose your credit card you still have to make payments
And perhaps even payments for someone who took your credit card and went on a shopping spree.
The data model in the code is what's important, but more important than that is the discovery and uncovered details that's happened while writing the code. That whole refactoring thing is just a sideshow to the underlying data model changing to better match the real world conditions the software has to run under.
It's a nice thought to have for when I'm mentally dooming about vibe coding killing my job and entire field.
Absolute nonsense. There is little division between the ideas and the code. Saying they're completely different isn't just myopic, it's dangerous.
> The design was in my head, and with hindsight I could see all its flaws. I was therefore able to create a much more efficient and effective design based on that learning. All the mistakes had been made, so I was able to get this version of the code right the first time.
I feel like you need to be careful with this. I've seen a lot of cases where the rewrite ends up being more complicated than the original, even stuck in rewrite hell so it never takes off. I think it's called the "second-system effect".
I mean code ages quickly, so the value of software must include the skillset needed to support and maintain it. Which is why enterprise software contracts exist, and are expensive. You're not paying for the binary. You're paying for the team supporting it.