“None of the safe programming languages existed for the first 10 years of SQLite's existence. SQLite could be recoded in Go or Rust, but doing so would probably introduce far more bugs than would be fixed, and it may also result in slower code.”
Modern languages might do more than C to prevent programmers from writing buggy code, but if you already have bug-free code due to massive time, attention, and testing, and the rate of change is low (or zero), it doesn't really matter what the language is. SQLite could be written in assembly language for all it would matter.
This jibes with a point that the Google Security Blog made last year: "The [memory safety] problem is overwhelmingly with new code... Code matures and gets safer with time."
Heartbleed was a great demonstration of critical systems that were underappreciated.
Too few maintainers, too few security researchers, and too little funding.
When writing systems as complicated and as sensitive as the leading encryption suite used globally, no language choice will save you from under-resourcing.
Agreed. I rather dislike the idea of "safe" coding languages. I've been fighting a memory leak in an Elixir app for the past week. I never viewed C or C++ as unsafe. Writing code is hard, always has been, always will be. It is never safe.
Safe code is just code that cannot have undefined behavior. C and C++ have the concept of "soundness" just like Rust; they simply have no way to statically guard against unsoundness.
"SQLite could be recoded in Go or Rust, but doing so would probably introduce far more bugs than would be fixed, and it may also result in slower code."
We will see. On the Rust side there is Turso which is pretty active.
I think beyond the historical reasons why C was the best choice when SQLite was being developed, or the advantages it has today, there's also just no reason to rewrite SQLite in another language.
We don't have to have one implementation of a lightweight SQL database. You can go out right now and start your own implementation in Rust or C++ or Go or Lisp or whatever you like! You can even make compatible APIs for it so that it can be a drop-in replacement for SQLite! No one can stop you! You don't need permission!
But why would we want to throw away the perfectly good C implementation, and why would we expect the C experts who have been carefully maintaining SQLite for a quarter century to be the ones to learn a new language and start over?
> But why would we want to throw away the perfectly good C implementation, and why would we expect the C experts who have been carefully maintaining SQLite for a quarter century to be the ones to learn a new language and start over?
Because a lot of language advocacy has degraded to telling others what you want them to do instead of showing by example what to do. The idea behind this is that language adoption is some kind of zero-sum game. If you're developing project 'x' in language 'y' then you are by definition not developing it in language 'z'. This reduces the stature of language 'z', and the continued existence of project 'x' in spite of not being written in language 'z' makes people wonder if language 'z' is actually as much of a necessity as its proponents claim. And never mind the fact that if the authors of 'x' were to revisit the decision of what language to write 'x' in, not only would language 'z' be on the menu, but also languages 'd', 'l', 'j' and 'g'.
Given that the common retort for why not to try project X in new language Y is "it's barely used in other things; let's wait and see it get industry adoption before trying it out", it's hard to see it as anything OTHER than a zero-sum game. As much as I like Rust I recognize some things like SQLite are better off in C. But the reason you see so much push for some new languages is because if they don't get and maintain regular adoption, they will die off.
Yeah... I always remind myself of the Netscape browser. A lesson in "if it's working, don't mess with it".
My question is always the reverse. Why try it in new language Y? Is there some feature that Y provides that was missing in X? How often do those features come up?
A company I worked for decided to build out a new microservice in language Y. The whole company was writing in W and X, but they decided to write the new service in Y. When something goes wrong, or a bug needs fixing, 3 people in a company of over 100 devs know Y. Guess what management is doing... Rewriting it in X.
One good reason is that people have written golang adapters, so that you can use sqlite databases without cgo.
I agree with what I think you're saying, which is that "sqlite" has, to some degree, become so ubiquitous that it's evolved beyond a single implementation.
We, of course, have sqlite the C library but there is also sqlite the database file format and there is no reason we can't have an sqlite implementation in golang (we already do) and one in pure rust too.
I imagine that in the future that will happen (pure rust implementation) and that perhaps at some point much further in the future, that may even become the dominant implementation.
Thanks for this, I fully agree. One frustration I have with the modern moment is the tendency to view anything more than five years old with disdain, as utterly irrelevant and obsolete. Maybe I'm just getting old, but I like my technology dependable and boring, especially software. Glad to see someone express respect for the decades of expertise that have gone into things we take for granted.
> Safe languages insert additional machine branches to do things like verify that array accesses are in-bounds. In correct code, those branches are never taken. That means that the machine code cannot be 100% branch tested, which is an important component of SQLite's quality strategy.
Huh, it's not every day that I hear a genuinely new argument. Thanks for sharing.
I guess I don’t find that argument very compelling. If you’re convinced the code branch can’t ever be taken, you also should be confident that it doesn’t need to be tested.
This feels like chasing arbitrary 100% test coverage at the expense of safety. The code quality isn’t actually improved by omitting the checks even though it makes testing coverage go up.
In safety-critical spaces you need to be able to trace any piece of a binary back to code, and back to requirements. If a piece of running code is only implicit in the source, that traceability back to requirements becomes harder. But I'd be surprised if things like bounds checks are really a problem for that kind of analysis.
Yeah sounds too clever by half, memory safe languages are less safe because they have bounds checks...maybe I could see it on a space shuttle? Well, only in the most CYA scenarios, I'd imagine.
Critical applications like that used to use Ada to get much more sophisticated checking than just bounds. No certified engineer would (should) ever design a safety-critical system without multiple "unreachable" fail-safe mechanisms.
Next they’ll have to tell me about how they had to turn off inlining because it creates copies of code which adds some dead branches. Bounds checks are just normal inlined code. Any bounds checked language worth its salt has that coverage for all that stuff already.
> If you’re convinced the code branch can’t ever be taken, you also should be confident that it doesn’t need to be tested.
I don't think I would (personally) ever be comfortable asserting that a code branch in the machine instructions emitted by a compiler can't ever be taken, no matter what, with 100% confidence, during a large fraction of situations in realistic application or library development, as to do so would require a type system powerful enough to express such an invariant, and in that case, surely the compiler would not emit the branch code in the first place.
One exception might be the presence of some external formal verification scheme which certifies that the branch code can't ever be executed, which is presumably what the article authors are gesturing towards in item D on their list of preconditions.
The argument here is that they're confident that the bounds check isn't needed, and would prefer the compiler not insert one.
The choices therefore are:
1. No bound check
2. Bounds check inserted, but that branch isn't covered by tests
3. Bounds check inserted, and that branch is covered by tests
I'm skeptical of the claim that if (3) is infeasible then the next best option is (1).
Because if it is indeed an impossible scenario, then the lack of coverage shouldn't matter.
If it's not an impossible scenario then you have an untested case with option (1) - you've overrun the bounds of an array, which may not be a branch in the code but is definitely a different behaviour than the one you tested.
If a code branch can't ever be taken, doesn't that mean you do not need it? Basically, it must be code that will never get executed, so leaving it out does not matter.
If you then can come up with a scenario where you do need it, well, in fully tested code you do need to test it.
There is a whole 'nother level of safety validation that goes beyond your everyday OWASP, or heck even what we consider "highly regulated" industry requirements that 95-99% of us devs care about. SQLite is used in some highly specialized, highly sensitive environments, where they are concerned about bit flips, and corrupted memory. I had the luxury of sitting through Richard Hipp's talk about it one time, but I am certainly butchering it.
So is the argument that safe langs produce stuff like:
// pseudocode
if (i >= array_length) panic("index out of bounds")
that are never actually run if the code is correct? But (if I understand correctly) these are checks implicitly added by the compiler. So the objection amounts to questioning the correctness of this auto-generated code, and is predicated upon mistrusting the correctness of the compiler? But presumably the Rust compiler itself would have thorough tests that these kinds of checks work?
Someone please correct me if I'm misunderstanding the argument.
One of the things that SQLite is explicitly designed to do is have predictable behavior in a lot of conditions that shouldn't happen. One of those predictable behaviors is that it does its best to stay up and running, continuing to do the best it can. Conditions where it should succeed in doing this include OOM, the possibility of corrupted data files, and (if possible) misbehaving CPUs.
Automatic array bounds checks can get hit by corrupted data. Thereby leading to a crash of exactly the kind that SQLite tries to avoid. With complete branch testing, they can guarantee that the test suite includes every kind of corruption that might hit an array bounds check, and guarantee that none of them panic. But if the compiler is inserting branches that are supposed to be inaccessible, you can't do complete branch testing. So now how do you know that you have tested every code branch that might be reached from corrupted data?
Furthermore those unused branches are there as footguns which are reachable with a cosmic ray bit flip, or a dodgy CPU. Which again undermines the principle of keeping running if at all possible.
In Rust at least you are free to access an array via .get, which returns an Option and avoids the "compiler inserted branch" (which isn't compiler inserted, by the way: [] access just implicitly calls unwrap on .get, and sometimes the compiler isn't able to elide it).
Also you rarely need to actually access by index - you could just access using functional methods on .iter() which avoids the bounds check problem in the first place.
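A rough sketch of both patterns (the function names are made up):

    fn sum_every_other(data: &[u32]) -> u32 {
        // .get() returns Option<&u32>: the out-of-range case is
        // ordinary, testable control flow rather than a panic branch.
        let mut total = 0;
        let mut i = 0;
        while let Some(&x) = data.get(i) {
            total += x;
            i += 2;
        }
        total
    }

    fn sum_all(data: &[u32]) -> u32 {
        // Iterator-based access never indexes at all, so there is
        // no bounds check to elide or to leave untested.
        data.iter().sum()
    }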
I had Vec in mind but regardless nothing forces you to use the bounds-checked variant vs one that returns Option<T>. And if you really are sure the bounds hold you can always use the assume crate or just unwrap_unchecked explicitly.
Keeping running if possible doesn't sound like the best strategy for stability. If data was corrupted in memory in a way that would cause a bounds check to fail, then carrying on is likely to corrupt more data. Panic, dump a log, let a supervisor program deal with the next step, or a human, but don't keep going, potentially persisting corrupted data.
What the best strategy is depends on your use case.
The use case that SQLite has chosen to optimize for is critical embedded software. As described in https://www.sqlite.org/qmplan.html, the standard that they base their efforts on is a certification for use in aircraft. If mission critical software on a plane is allowed to crash, this can render the controls inoperable. Which is likely to lead to a very literal crash some time later.
The result is software that has been optimized to do the right thing if at all possible, and to degrade gracefully if that is not possible.
Note that the open source version of SQLite is not certified for use in aviation. But there are versions out there that have been certified. (The difference is a ton of extra documentation.) And in fact SQLite is in use by Airbus. Though the details of what exactly for are not, as far as I know, public.
If this documented behavior is not what you want for your use case, then you should consider using another database. Though, honestly, no other database comes remotely close when it comes to software quality. And therefore I doubt that "degrade as documented rather than crash" is a good reason to avoid SQLite. (There are lots of other potential reasons for choosing another database.)
outside political definitions, I'm not sure "crash and restart with a supervisor" and "don't crash" are meaningfully different? they're both error-handling tactics, likely perfectly translatable to each other, and Erlang stands as an existence proof that crashing is a reasonable strategy in extremely reliable software.
I fully recognize that political definitions drive purchases, so it's meaningful to a project either way. but that doesn't make it a valid technical argument.
It still needs to detect that there is corrupted data and dump the log, and an external supervisor would not be the best option since in some runtimes it could be missing. So they just build it into the library itself, and we come full circle.
I think it’s less like doubting that the given panic works and more like an extremely thorough proof that all possible branches of the control flow have acceptable behavior. If you haven’t tested a given control flow, the issue is that it’s possible that the end result is some indeterminate or invalid state for the whole program, not that the given bounds check doesn’t panic the way it’s supposed to. On embedded for example (which is an important usecase for SQLite) this could result in orphaned or broken resources.
> I think it’s less like doubting that the given panic works and more like an extremely thorough proof that all possible branches of the control flow have acceptable behavior.
The way I was thinking about it was: if you somehow magically knew that nothing added by the compiler could ever cause a problem, it would be redundant to test those branches. Then wondering why a really well tested compiler wouldn't be equivalent to that. It sounds like the answer is, for the level of soundness sqlite is aspiring to, you can't make those assumptions.
But does it matter if that control flow is unreachable?
If the check never fails, it is logically equivalent to not having the check. If the code isn't "correct" and the panic is reached, then the equivalent c code would have undefined behavior, which can be much worse than a panic.
> But (if I understand correctly) these are checks implicitly added by the compiler.
This is a dubious statement. In Rust, the array indexing operator arr[i] is syntactic sugar for calling the function arr.index(i), and the implementation of this function on the standard library's array types is documented to perform a bounds-check assertion and access the element.
So the checks aren't really implicitly added -- you explicitly called a function that performs a bounds check. If you want different behavior, you can call a different, slightly-less-ergonomic indexing function, such as `get` (which returns an Option, making your code responsible for handling the failure case) or `get_unchecked` (which requires an unsafe block and exhibits UB if the index is out of bounds, like C).
> questioning the correctness of this auto-generated code
I wouldn't put it that way. Usually when we say the compiler is "incorrect", we mean that it's generating code that breaks the observable behavior of some program. In that sense, adding extra checks that can't actually fail isn't a correctness issue; it's just an efficiency issue. I'd usually say the compiler is being "conservative" or "defensive". However, the "100% branch testing" strategy that we're talking about makes this more complicated, because this branch-that's-never-taken actually is observable, not to the program itself but to its test suite.
it's ignoring that many such checks get reliably optimized away
worse, it's a bit like saying "in case of a broken invariant I prefer arbitrary, potentially highly problematic behavior over clean aborts (or errors) because my test tooling is inadequate"
instead of saying "we haven't found adequate test tooling" for our use case
Why inadequate? Because technically test setups can use
1. fault injection to test such branches even if normally you would never hit them
2. for many of such tests (especially array bound checks) you can pretty reliably identify them and then remove them from your test coverage statistic
idk what the tooling of Rust wrt this is in 2025, but around the Rust 1.0 times you mainly had C tooling applied to Rust, so you had problems like that back then.
Ok, but you can still test all the branches in your source code and have 100% coverage. Those additional `if` branches are added by the compiler. You are responsible for testing the code you write, not the one that actually runs. Your compiler's test suite is responsible for the rest.
By the same logic one could also claim that tail recursion optimisation, or loop unrolling are also dangerous because they change the way code works, and your tests don't cover the final output.
If they produce control flow _in the executable binary_ that is untested, then they could conceivably lead to broken states. I don’t believe most of those sorts of transformations cause alternative control flows to be added to the executable binary.
I don’t think anyone would find the idea compelling that “you are only responsible for the code you write, not the code that actually runs” if the code that actually runs causes unexpected invalid behavior on millions of mobile devices.
>You are responsible for testing the code you write, not the one that actually runs.
Hipp worked as a military contractor on battleship software; furthermore, years later SQLite was under contract with every proto-smartphone company in the USA. Under these constraints you maybe are not responsible for testing what the compiler spits out across platforms and different compilers, but doing that makes the project a lot more reliable, and makes it sexier for embedded and weapons.
I don't see anything wrong with taking responsibility for the code that actually runs. I would argue that level of accountability has played a part in SQLite being such a great project.
It's the sort of argument that I wouldn't accept from most people and most projects, but Dr Hipp isn't most people and SQLite isn't most projects.
Certainly don't get me wrong, SQLite is one of the best and most thoroughly tested libraries out there. But this reads like an argument included just to have 4 arguments; 2 of the others break down as "Those languages didn't exist when we first wrote SQLite and we aren't going to rewrite the whole library just because a new language came around."
Any language, including C, will emit or not emit instructions that are "invisible" to the author. For example, whenever the C compiler decides it can autovectorize a section of a function it'll be introducing a complicated set of SIMD instructions and new invisible branch tests. That can also happen if the C compiler decides to unroll a loop for whatever reason.
The entire point of compilers and their optimizations is to emit instructions which keep the semantic intent of higher level code. That includes excluding branches, adding new branches, or creating complex lookup tables if the compiler believes it'll make things faster.
Dr Hipp is completely correct in rejecting Rust for SQLite. SQLite is already written and extremely well tested. Switching over to a new language now would almost certainly introduce new bugs that don't currently exist, as it'd inevitably need to be changed to remain "safe".
If it were as completely tested as claimed, then switching to Rust would be trivial: all you need to do is pass the test suite and all bugs would be gone. I can think of other reasons not to jump to Rust (it is a lot of code, SQLite already works well, test coverage is very good but still incomplete, and Rust only solves a few correctness problems), just not the claim that SQLite is already tested enough to be free of the kinds of issues that Rust might actually prevent.
no, you still need to rewrite, re-optimize, etc. everything
it would make it much easier to be fully compatible, sure, but that doesn't make it trivial
furthermore, parts of its (mostly internal) design are strongly influenced by C-specific dev-UX aspects, so you wouldn't write them the same way, and tests for them (as opposed to integration tests) may not apply
which in general also means that you most likely would break some special-purpose/unusual users who do have "brittle" (not guaranteed) assumptions about SQLite
if you have code which very little if at all changes and has no major issues, don't rewrite it
but most of the new "external" things written around SQLite, alternative VFS impl. etc. tend to be at most partially written in C
I wonder if this problem could be mitigated by not requiring coverage of branches that unconditionally lead to panics, or if there could be some kind of marking on those branches to indicate that they should never occur in correct code.
Yes. You have to write `unsafe { ... }` around it, so there's an ergonomic penalty plus a more nebulous "sense that you're doing something dangerous that might get some skeptical looks in code review" penalty, but the resulting assembly will be the same as indexing in C.
I figured, but I guess I don't understand this argument then. SQLite as a project already spends a lot of time on quality so doing some `unsafe` blocks with a `// SAFETY:` comment doesn't seem unreasonable if they want to avoid the compiler inserting a panic branch for bounds checks.
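Something like this sketch (hypothetical function) is what I'd expect that to look like:

    fn first_byte(data: &[u8]) -> u8 {
        assert!(!data.is_empty());
        // SAFETY: the assert above guarantees index 0 is in bounds,
        // so no bounds-check branch is emitted for the access below.
        unsafe { *data.get_unchecked(0) }
    }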
I think those branches are often not there because it's provably never going out of bounds. There are ways to ensure the compiler knows the bounds cannot be broken.
It's interesting to consider (and the whole page is very well-reasoned), but I don't think that the argument holds up to scrutiny. If such an automatic bounds-check fails, then the program would have exhibited undefined behavior without that branch -- and UB is strictly worse than an unreachable branch that does something well-specified like aborting.
A simple array access in C:
arr[i] = 123;
...can be thought of as being equivalent to:
if (i >= array_length) UB();
else arr[i] = 123;
where the "UB" function can do literally anything. From the perspective of exhaustively testing and formally verifying software, I'd rather have the safe-language equivalent:
if (i >= array_length) panic();
else arr[i] = 123;
...because at least I can reason about what happens if the supposedly-unreachable condition occurs.
Dr. Hipp mentions that "Recoding SQLite in Go is unlikely since Go hates assert()", implying that SQLite makes use of assert statements to guard against unreachable conditions. Surely his testing infrastructure must have some way of exempting unreachable assert branches -- so why can't bounds checks (that do nothing but assert undefined behavior does not occur) be treated in the same way?
The 100% branch testing is on the compiled binary. To exempt unreachable assert branches, turn off assertions, compile, and test.
A more complex C program can have index range checking at a different place than the simple array access. The compiler's flow analysis isn't always able to confirm that the index is guaranteed to be checked. If it therefore adds a cautionary (and unneeded) range check, then this code branch can never be exercised, making the code no longer 100% branch tested.
you basically say that if deeply unexpected things happen, you prefer your program doing wildly arbitrary and as such potentially dangerous things over a clean abort or proper error... that doesn't seem right
worse, it's due to a lack in the used tooling and not a fundamental problem: not only can you test these branches (using fault injection), you also often (not always) can separate them from relevant branches when collecting the branch statistics
so the whole argument misses the point (which is that tooling is lacking, not that extra checks for array bounds and similar are bad)
lastly array bounds checking is probably the worst example they could have given as it
- often can be disabled/omitted in optimized builds
- is quite often optimized away
- often has quite low perf overhead
- bound check branches are often very easy to identify, i.e. excluding them from a 100% branch testing statistic is viable
- out of bounds read/write are some of the most common cases of memory unsafety leading to security vulnerability (including full RCE cases)
> In incorrect code, the branches are taken, but code without the branches just behaves unpredictably.
It's like seat belts.
E.g. what if we drive four blocks and then the case occurs where the seatbelt is needed? Okay, we have an explicit test for that.
But we cannot test everything. We have not tested what happens if we drive four blocks, and then take a right turn, and hit something half a block later.
Screw it, just remove the seatbelts and not have this insane untested space whereby we are never sure whether the seat belt will work properly and prevent injury!
> All that said, it is possible that SQLite might one day be recoded in Rust. Recoding SQLite in Go is unlikely since Go hates assert(). But Rust is a possibility. Some preconditions that must occur before SQLite is recoded in Rust include:
- Rust needs to mature a little more, stop changing so fast, and move further toward being old and boring.
- Rust needs to demonstrate that it can be used to create general-purpose libraries that are callable from all other programming languages.
- Rust needs to demonstrate that it can produce object code that works on obscure embedded devices, including devices that lack an operating system.
- Rust needs to pick up the necessary tooling that enables one to do 100% branch coverage testing of the compiled binaries.
- Rust needs a mechanism to recover gracefully from OOM errors.
- Rust needs to demonstrate that it can do the kinds of work that C does in SQLite without a significant speed penalty.
1. Rust has had ten years since 1.0. It changes in backward compatible ways. For some people, they want no changes at all, so it’s important to nail down which sense is meant.
2. This has been demonstrated.
3. This one hinges on your definition of “obscure,” but the “without an operating system” bit is unambiguously demonstrated.
4. I am not an expert here, but given that you’re testing binaries, I’m not sure what is Rust specific. I know the Ferrocene folks have done some of this work, but I don’t know the current state of things.
5. Rust as a language does no allocation. This OOM behavior is in the standard library, which you're not using in these embedded cases anyway. There, you're free to do whatever you'd like, as it's all just library code.
6. This also hinges on a lot of definitions, so it could be argued either way.
ironically, if we look at how things play out in practice, Rust is far more suited as a general-purpose language than C, to the point where I would argue C is only a general-purpose language on a technicality, not on a practical IRL basis
this is especially ridiculous when they argue C is the fastest general-purpose language, when that has proven to simply not hold up in larger IRL projects (i.e. not micro benchmarks)
C has terrible UX for generic code reuse and memory management, which often means that in IRL projects people don't write the fastest code. Wrt. memory management it's not rare to see unnecessary clones, as not copying too easily leads to bugs. Wrt. data structures you write the code which is maintainable, robust and fast enough, and sometimes add the 10th maximally simple reimplementation (or C macro or similar) of some data structure instead of reusing data structures people have spent years fine-tuning.
When people switched a lot from C to C++, most general-purpose projects got faster, not slower. And even for the C++ to Rust case it's not rare that companies end up with faster projects after the switch.
Both C++ and Rust also allow more optimization in general.
So C is only fastest in micro benchmarks, after excluding stuff like Fortran for not being general purpose, while itself not really being used much anymore for general-purpose projects...
I think Rust (and C++) are just too complicated and visually ugly, and ultimately that hurts the maintainability of the code. C is simple, universal, and arguably beautiful to look at.
The lack of dependency hell is a bit of an illusion when it comes to C. What other languages solve via library most C projects will reimplement themselves, which of course increases the chance for bugs.
But that is optional. For this kind of project, it is logical to adopt something like the TigerBeetle ethos and own all the code and have no external deps (or vendor them). Even write your own std if you want to.
Is it hard work? Yes, but it is not that different from what you see in certain C projects that don't use external deps either.
Then don't? In C you would just implement everything yourself, so go do that in Rust if you don't want dependencies.
In C I've seen more half-baked json implementations than I can count on my fingers because using dependencies is too cumbersome in that ecosystem and people just write it themselves but most of the time with more bugs.
One question towards maturity: has any working version of the Rust compiler ever existed? By which I mean one that successfully upholds the memory-safety guarantees Rust is supposed to make, and does not have any "soundness holes" (which IIRC were historically used as a blank check / excuse to break backwards compatibility).
The current version of the Rust compiler definitely doesn't -- there's known issues like https://github.com/rust-lang/rust/issues/57893 -- but maybe there's some historical version from before the features that caused those problems were introduced.
In general, the way Rust blurs the line between "bugs in the compiler" and "problems with how the language is designed" seems pretty harmful and misleading. But it's also a core part of the marketing strategy, so...
You are correct that Rust's marketing does not claim that there are no bugs in its compiler. In fact it does the opposite: it suggests that there are no problems with the language, by asserting that any observed issue in the language is actually a bug in the compiler.
Like, in the C world, there's a difference between "the C specification has problems" and "GCC incorrectly implements the C specification". You can make statements about what "the C language" does or doesn't guarantee independently of any specific implementation.
But "the Rust language" is not a specification. It's just a vague ideal of things the Rust team is hoping their compiler will be able to achieve. And so "the Rust language" gets marketed as e.g. having a type system that guarantees memory safety, when in fact no such type system has been designed -- the best we have is a compiler with a bunch of soundness holes. And even if there's some fundamental issue with how traits work that hasn't been resolved for six years, that can get brushed off as merely a compiler bug.
This propagates down into things like Rust's claims about backwards compatibility. Rust is only backwards-compatible if your programs are written in the vague-ideal "Rust language". The Rust compiler, the thing that actually exists in the real world, has made a lot of backwards-incompatible changes. But these are by definition just bugfixes, because there is no such thing as a design issue in "the Rust language", and so "the Rust language" can maintain its unbroken record of backwards-compatibility.
> And even if there's some fundamental issue with how traits work that hasn't been resolved for six years, that can get brushed off as merely a compiler bug.
Is it getting brushed off as merely a compiler bug? At least if I'm thinking of the same bug as you [0] the discussion there seems to be more along the lines of the devs treating it as a "proper" language issue, not a compiler bug. At least as far as I can tell there hasn't been a resolution to the design issue, let alone any work towards implementing a fix in the compiler.
The soundness issue that I see more frequently get "brushed off as merely a compiler bug" is the lifetime variance one underpinning cve-rs [1], which IIRC the devs have long decided what the proper behavior should be but actually implementing said behavior is blocked behind some major compiler reworks.
> has made a lot of backwards-incompatible changes
Not sure I've seen much evidence for "a lot" of compatibility breaks outside of the edition system. Perhaps I'm just particularly (un)lucky?
> because there is no such thing as a design issue in "the Rust language"
I'm not sure any of the Rust devs would agree? Have any of them made a claim along those lines?
The Rust team may see this as a language design issue internally, and I'd be inclined to agree. Rust's outward-facing marketing does not reflect this view.
> I linked the same bug you did in the comment that that's a reply to
Ah, my apologies. Not sure exactly how I managed to miss that.
That being said, I guess I might have read that bit of your comment different than you had in mind; I was thinking of whether the Rust devs were dismissing language design issues as compiler bugs, not what third parties (albeit one with an unusually relevant history in this case) may think.
> Rust's outward-facing marketing does not reflect this view.
As above, perhaps I interpret the phrase "outward-facing marketing" differently than you do. I typically associate that (and "marketing" in general, in this context) with more official channels, whether that's official posts or posts by active devs in an official capacity.
Oh, I didn't realize steveklabnik wasn't an official member of the project anymore (as of 2022 apparently: https://blog.rust-lang.org/2022/01/31/changes-in-the-core-te... ). I do think he still expressed this position back when he was a major public face of the language, but it seems unfair to single him out and dig through his comment history.
Rust's marketing is pretty grassroots in general, but even current official sources like https://rust-lang.org/ say things like "Rust’s rich type system and ownership model guarantee memory-safety" that are only true of the vague-ideal "Rust language" and are not true of the type system they actually designed and implemented in the Rust compiler.
Rust insists on its own toolchain manager "rustup" and frowns on distro maintainers. When Rust is happy to just be packaged by the distro and rustup has gone away, then it will have matured to at least adolescence.
the rust version packaged in distros is for compiling rust code shipped as part of the distro. This means it
- is normally not the newest version (which, to be clear, is not bad per se, but not necessarily what you need)
- might not have all optional components (e.g. no clippy)
but if you, idk, write a server deployed by your company
- you likely want all components
- you don't need to care what version the distro pinned
- you have little reason not to use the latest rust compiler
for other use cases you have other reasons, some need nightly rust, some want to test against beta releases, some want to be able to test against different rust versions etc. etc.
rustup exists (today) for the same reason that a lot of dev projects use project-specific copies of all kinds of tooling and libraries which do not match whatever their distro ships: the distro use-case and the generic dev use-case have diverging requirements! (Other examples: nvm (node), flutter, java, etc.)
Also some distros are notorious for shipping outdated software (debian "stable").
Distributions generally package the versions of compilers that are needed to build the programs in their package manager. However, many developers want more control than that. They may want to use different versions of the compiler on different projects, or a different version than what’s packaged.
> Rust has had ten years since 1.0. It changes in backward compatible ways. For some people, they want no changes at all, so it’s important to nail down which sense is meant.
I’d love to see Rust be so stable that MSRV is an anachronism. I want it to be unthinkable that you’d have any reason not to support Rust from forever ago, because the feature set is so stable.
For a little more color on 5, as a user of no_std Rust on embedded processors I use crates like heapless or trybox that provide Vec, String, etc. APIs like the std ones, but fallible.
Of course, two libraries that choose different no_std collection types can't communicate...but hey, we're comparing to C here.
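A minimal sketch of what that fallibility looks like with heapless (the function name is made up):

    use heapless::Vec; // fixed-capacity Vec, no allocator needed

    fn collect_samples() -> Result<Vec<u16, 8>, u16> {
        let mut samples: Vec<u16, 8> = Vec::new();
        for s in [1u16, 2, 3] {
            // push() is fallible: a full buffer returns Err(value)
            // instead of allocating or aborting on OOM.
            samples.push(s)?;
        }
        Ok(samples)
    }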
and these things you can do in Rust too, though with a bit of pain and limitations on how you write Rust
and then there is the rest, which looks "hard but doable" in C, but the more you learn about it the more it's a "uh, wtf, nightmare" case where "let's kill+restart and have robustness even in the presence of the process/error kernel dying" is nearly always the right answer.
Because C's assert gets compiled out if you have NDEBUG defined in your program. How do you do conditional compilation in Go (at the level of conditionally including or not including a statement)?
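For comparison, Rust's closest analogue gets the same effect without a preprocessor, which may be part of why the article leaves the door open for Rust but not Go (my speculation); a sketch:

    fn lookup(table: &[u32], i: usize) -> u32 {
        // Checked in debug/test builds, compiled out entirely in
        // release builds, much like C's assert() under NDEBUG.
        debug_assert!(i < table.len());
        table[i]
    }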
It's kinda sad to read, as most of their arguments might seem right at first but really fall apart when put under scrutiny.
Like, why defend C in 2025 when you only have to defend C in 2000 and then argue you have an old, stable, deeply tested C code base which has no problem with anything like "commonly having memory safety issues" and is maintained by a small group of people very highly skilled in C.
That argument alone is all you need: a win, simple, straightforward, hard to contest.
But most of the other arguments they list can be picked apart and are only half true.
> Other programming languages sometimes claim to be "as fast as C". But no other language claims to be faster than C for general-purpose programming, because none are.
Not OP, and I'm not really arguing with the post, but this struck me as a really odd thing to include in the article. Of course nothing is going to be faster than C, because it compiles straight to machine code with no garbage collection. Literally any language that does the same will be the same speed but not faster, because there's no way to be faster. It's physically impossible.
A much better statement, and one in line with the rest of the article, would be that at the time C and C++ were really the only viable languages that gave them the performance they wanted, and C++ wouldn't have given them the interoperability they wanted. So their only choice was C.
I think one additional factor that should be taken into account is the amount of effort required to achieve a given level of performance, as well as what extensions you're willing to accept. C with potentially non-portable constructs (intrinsics, inline assembly, etc.) and an unlimited amount of effort put into it provides a performance ceiling, but it's not inconceivable that other programming languages could achieve an equal level of performance with less effort, especially if you compare against plain standard C. Languages like ISPC that expose SIMD/parallelism in a more convenient manner is one example of this.
Another somewhat related example is Fortran and C, where one reason Fortran could perform better than C is the restrictions Fortran places on aliasing. In theory, one could use restrict in C to replicate these aliasing restrictions, but in practice restrict is used fairly sparingly, to the point that when Rust tried to enable its equivalent it had to back out the change multiple times because it kept exposing bugs in LLVM's optimizer.
The argument you propose only works for justifying a maintenance mode for an old codebase. If you want to take the chance to turn new developers away from complex abominations like C++ and Rust and garbage-collected sloths like Java, and get them to consider a comparatively simple but ubiquitous language like C, you have to offer more.
As I write more code, use more software and read about rewrites...
The biggest gripe I have with a rewrite is... a lot of the time we rewrite for feature parity, not the exact same thing. So you are kind of ignoring/missing/forgetting all those edge cases and patches that were added along the way for so many niche or other reasons.
This means broken software. Something which used to work before but not anymore. They'll have to encounter all of those cases again in the wild and fix them again.
Obviously if we are to rewrite an important piece of software like this, you'd emphasise more on all of these. But it's hard for me to comprehend whether it will be 100%.
But other than sqlite, think SDL. If it is to be rewritten, it's really hard for me to believe the effect would be negligible. I'm guessing horrible releases before it gets better. Users complaining about things that used to work.
My money is on C still being there long after the next Rust arrives. And even if Rust is still present, there would be a new Rust by then.
So why rewrite? Rewrites shouldn't be the default thinking no?
I'm curious about tptacek's comment (https://news.ycombinator.com/item?id=28279426). 'the "security" paragraphs in this page do the rest of the argument a disservice. The fact is, C is a demonstrable security liability for sqlite.'
The current doc no longer has any paragraphs about security, or even the word security once.
The 2021 edition of the doc contained this text which no longer appears: 'Safe languages are often touted for helping to prevent security vulnerabilities. True enough, but SQLite is not a particularly security-sensitive library. If an application is running untrusted and unverified SQL, then it already has much bigger security issues (SQL injection) that no "safe" language will fix.
It is true that applications sometimes import complete binary SQLite database files from untrusted sources, and such imports could present a possible attack vector. However, those code paths in SQLite are limited and are extremely well tested. And pre-validation routines are available to applications that want to read untrusted databases that can help detect possible attacks prior to use.'
It sounds like the core doesn't even allocate, and presumably the extended library allocates in limited places using safe patterns. So there wouldn't be much benefit from Rust anyway, I'd think. Has SQLite ever had a memory leak or use-after-free bug in a production release? If so, that answers the question. But I've never heard of one.
Also, does it use doubly linked lists or graphs at all? Those can, in a way, be safer in C since Rust makes you roll your own virtual pointer arena.
> Also, does it use doubly linked lists or graphs at all? Those can, in a way, be safer in C since Rust makes you roll your own virtual pointer arena.
You can implement a linked list in Rust the same as you would in C using raw pointers and some unsafe code. In fact there is one in the standard library.
> Had SQLite ever had a memory leak or use-after-delete bug on a production release?
sure, it's an old library, they have had pretty much everything (not because they don't know what they are doing but because shit happens)
let's check CVEs of the last few years:
- CVE-2025-29088 type confusion
- CVE-2025-29087 out of bounds write
- CVE-2025-7458 integer overflow, possible in optimized rust but test builds check for it
- CVE-2025-6965 memory corruption, rust might not have helped
- CVE-2025-3277 integer overflow, rust might have helped
- CVE-2024-0232 use after free
- CVE-2023-36191 segmentation violation, unclear if rust would have helped
- CVE-2023-7104 buffer overflow
- CVE-2022-46908 validation logic error
- CVE-2022-35737 array bounds overflow
- CVE-2021-45346 memory leak
...
as you can see, the majority of SQLite's CVEs would be much less likely in Rust (but a Rust SQLite impl. would likely use unsafe, so not impossible)
as a side note, there being so many CVEs in 2025 seems to be related to some companies (e.g. Google) having done quite a bit of fuzz testing of SQLite
other takeaways:
- 100% branch coverage is nice, but doesn't guarantee memory soundness in C
- given how deeply people look for CVEs in SQLite the number of CVEs found is not at all as bad as it might look
but also one final question:
SQLite has some of the best C programmers out there, only they can merge anything into the code, and it has a very limited degree of change compared to a typical company project. And we still have memory vulnerabilities. How is anyone still arguing for C for new projects?
If I'm remembering a DuckDB talk I attended correctly, they chose C++ because they were most confident in their ability to write clear code in it which would be autovectorized by the compilers they were familiar with. Rust in 2019 didn't have a clear high level SIMD story yet and the developers (wisely) did not want to maintain handrolled SIMD code.
If maximum performance is a top objective, it is probably because C++ produces faster binaries with less code. Modern C++ specifically also has a lot of nice compile-time safety features, especially for database-like code.
The point about bounds checking in "safe" languages is well taken: it does prevent 100% test coverage. As we all agree, SQLite has been exhaustively tested, and arguments for bounds checking in it are therefore weakened. Still, that's not an argument for replicating this practice elsewhere, not unless you are Dr Hipp and willing to work very hard at testing. C.A.R. Hoare's comment on eliminating runtime checks in release builds applies here: "What would we think of a sailing enthusiast who wears his life-jacket when training on dry land but takes it off as soon as he goes to sea?"
I am not Dr Hipp, and therefore I like run-time checks.
> The C language is old and boring. It is a well-known and well-understood language.
So you might think, but there is a committee actively undermining this, not to mention compiler people keeping things exciting also.
There is a dogged adherence to backward compatibility, so you can pretend C has not gone anywhere in thirty-five years, if you like, provided you aren't invoking too much undefined behavior. (You can't as easily pretend that your compiler has not gone anywhere in 35 years with regard to things you are doing out of spec.)
The fact that a C library can easily be wrapped by just about any language is really useful. We're considering writing a library for generating a UUID (that contains a key and value) for reasons that make sense to us and I proposed writing this in C so we could simply wrap it as a library for all of the languages we use internally rather than having to re-implement it several times. Not sure if we'll actually build this library but if we do it will be in C (I did managed to get the "wrap it for each language" proposal pre-approved).
You can expose a C interface from many languages (C++, Rust, C# to name a few that I've personally used). Instead of introducing a new language entirely, it's probably better to write the library in one of the languages you already use.
It is. You can also write it in C++ or Rust and expose a C API+ABI, and then you're distributing a binary library that the OS sees as very similar to a C library.
Occasionally when working in Lua I'd write something low-level in C++, wrap it in C, and then call the C wrapper from Lua. It's extra boilerplate but damn is it nice to have a REPL for your C++ code.
Edit: Because someone else will say it - Rust binary artifacts _are_ kinda big by default. You can compile libstd from scratch on nightly (it's a couple flags) or you can amortize the cost by packing more functions into the same binary, but it is gonna have more fixed overhead than C or C++.
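For the curious, a minimal sketch of the Rust side (names illustrative; the crate would set crate-type = ["cdylib"] in Cargo.toml so the OS sees an ordinary shared library):

    // Callable from C as:
    //   unsigned int add_checked(unsigned int a, unsigned int b);
    #[no_mangle]
    pub extern "C" fn add_checked(a: u32, b: u32) -> u32 {
        a.saturating_add(b)
    }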
> It is. You can also write it in C++ or Rust and expose a C API+ABI, and then you're distributing a binary library that the OS sees as very similar to a C library.
If I want a "C Library", I want a "C Library" and not some weird abomination that has been surgically grafted to libstdc++ or similar (but be careful of which version as they're not compatible and the name mangling changes and ...).
This isn't theoretical. It's such a pain that the C++ folks started resorting to header-only libraries just to sidestep the nightmare.
This makes me less safe rather than more. Note that there is a substantial double standard here, we could never in the name of safety impose this level of burden from C tooling side because maintainers would rightfully be very upset (even toggling a warning in the default set causes discussions). For the same reason it should be unacceptable to use Rust before this is fixed, but somehow the memory safety absolutists convinced many people that this is more important than everything else. (I also think memory safety is important, but I can't help but thinking that pushing for Rust is more harmful to me than good. )
SQLite is a true landmark. C notwithstanding, it just happened to be the right tool at the right time, and by now anything else is, well, not as interesting as what they have going on now; it totally bucks the trend of throwaway software.
It has async I/O support on Linux with io_uring, vector support, BEGIN CONCURRENT for improved write throughput using multi-version concurrency control (MVCC), encryption at rest, and incremental computation using DBSP for incremental view maintenance and query subscriptions.
Time will tell, but this may well be the future of SQLite.
It should be noted that project has no affiliation with the SQLite project. They just use the name for promotional/aspirational purposes. Which feels incredibly icky.
Also, this is a VC backed project. Everyone has to eat, but I suspect that Turso will not go out of its way to offer a Public Domain offering or 50 year support in the way that SQLite has.
> They just use the name for promotional/aspirational purposes. Which feels incredibly icky.
The aim is to be compatible with sqlite, and a drop-in replacement for it, so I think it's fair use.
> Also, this is a VC backed project. Everyone has to eat, but I suspect that Turso will not go out of its way to offer a Public Domain offering or 50 year support in the way that SQLite has.
It's MIT license open-source. And unlike sqlite, encourages outside contribution. For this reason, I think it can "win".
Calling it “SQLite-compatible” would be one thing. That’s not what they do. They describe it as “the evolution of SQLite”.
It’s absolutely inappropriate and appropriative.
They’ve been poor community members from the start when they publicized their one-sided spat with SQLite over their contribution policy.
The reality is that they are a VC-funded company focused on the “edge database” hypetrain that’s already dying out as it becomes clear that CAP theorem isn’t something you can just pretend doesn’t exist.
It’ll very likely be dead in a few years, but even if it’s not, a VC-funded project isn’t a replacement for SQLite. It would take incredibly unique advantages to shift literally the entire world away from SQLite.
It’s a new thing, not the next evolution of SQLite.
The moment Turso becomes stable, SQLite will inevitably fade away with time if they don't rethink how contributions should be taken. I honestly believe the Linux philosophy of software development will be what catapults Turso forward.
Safe languages insert additional machine branches to do things like verify that array accesses are in-bounds. In correct code, those branches are never taken. That means that the machine code cannot be 100% branch tested, which is an important component of SQLite's quality strategy.
Rust needs to mature a little more, stop changing so fast, and move further toward being old and boring.
Rust needs to demonstrate that it can do the kinds of work that C does in SQLite without a significant speed penalty.
If the branch is never taken, and the optimizer can prove it, it will remove the check. Sometimes, if it can't actually prove it, there are ways to help it understand, or, in the almost extreme case, you do what I commented below.
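One common way to help it (a sketch, not anything from SQLite): take a sub-slice once up front, so a single check dominates the loop and LLVM can usually drop the per-iteration branches.

    fn sum_first_n(data: &[u32], n: usize) -> u32 {
        // One up-front bounds check (panics if n > data.len());
        // after this, the optimizer knows data.len() == n.
        let data = &data[..n];
        let mut total = 0;
        for i in 0..n {
            total += data[i]; // usually compiled without a check
        }
        total
    }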
Yeah I don't understand the argument. If you can't convince the compiler that that branch will never be taken, then I strongly suspect that it may be taken.
That's not the point. The point is that if it is never taken, you can't test it. They don't care that it inserts a conditional OP to check, they care that they can't test the conditional path.
But, there is no conditional path when the type system can assure the compiler that there is nothing to be conditional about. Do they mean that it's impossible to be 100% sure about whether there's a conditional path or not?
> Safe languages insert additional machine branches to do things like verify that array accesses are in-bounds. In correct code, those branches are never taken. That means that the machine code cannot be 100% branch tested, which is an important component of SQLite's quality strategy.
This is annoying in Rust. To me array accesses aren't the most annoying part; it's match{} branches that will never be invoked.
There is unreachable!() for such situations, and you would hope that:
if array_access_out_of_bounds { unreachable!(); }
is recognised by the Rust tooling and just ignored. That's effectively the same as SQLite is doing now by not doing the check. But it isn't ignored by the tooling: unreachable!() is reported as a missed line. Then there is the test code coverage including the standard output by default, and you have to use regexes on path names to remove it.
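If you truly want what SQLite does (no branch at all, rather than an ignorable one), the nearest equivalent I know of is core::hint::unreachable_unchecked, at the price of UB if the invariant is ever wrong; a sketch:

    fn type_name(tag: u8) -> &'static str {
        match tag & 0b11 {
            0 => "null",
            1 => "int",
            2 => "text",
            3 => "blob",
            // unreachable!() would panic and leave an untestable
            // branch; unreachable_unchecked() emits no branch at
            // all, with UB if this arm is ever actually reached.
            _ => unsafe { core::hint::unreachable_unchecked() },
        }
    }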
sqlite3 has one (apparently this is called "the amalgamation") c source file that is ~265 kloc (!) long with external dependencies on zlib, readline and ncurses. built binaries are libsqlite3.so at 4.8M and sqlite3 at 6.1M.
turso has 341 rust source files spread across tens of directories and 514 (!) external dependencies that produce (in release mode) 16 libraries and 7 binaries with tursodb at 48M and libturso_sqlite3.so at 36M.
looks roughly an order of magnitude larger to me. it would be interesting to understand the memory usage characteristics in real-world workloads. these numbers also sort of capture the character of the languages. for extreme portability and memory efficiency, probably hard to beat c and autotools though.
But if you don't have the bounds checks in machine code, then you don't have bounds checks.
I suppose SQLite might use a C linter tool that can prove the bounds checks happen at a higher layer, and then elide redundant ones in lower layers, but... C compilers won't do that by default, they'll just write memory-unsafe machine code. Right?
So, the argument for keeping SQLite written in C is that it gives the user the choice to either:
- Build SQLite with Yolo-C, in which case you get excellent performance and lots of tooling. And it's boring in the way that SQLite devs like. But it's not "safe" in the sense of memory safe languages.
- Build SQLite with Fil-C, in which case you get worse (but still quite good) performance and memory safety that exceeds what you'd get with a Rust/Go/Java/whatever rewrite.
Recompiling with Fil-C is safer than a rewrite into other memory safe languages because Fil-C is safe through all dependencies, including the syscall layer. Like, making a syscall in Rust means writing some unsafe code where you could screw up buffer sizes or whatnot, while making a syscall in Fil-C means going through the Fil-C runtime.
One thing I found especially interesting is the section at the end about why Rust isn't used. It leaves the door open and at least offers constructive feedback to the Rust community.
For a project that is functionally "done", switching doesn't make sense. Something like kernel code, where you know it'll continue to evolve, is different: there, going through the pain may be worth it.
Folks involved often do! Talking about what’s not great is the only path towards getting better, because you have to identify pain points in order to fix them.
I would go as far as saying that 90% of managing the project is properly communicating, discussing and addressing the ways in which Rust sucks. The all-hands in NL earlier this year was wall to wall meetings about how much things suck and what to do about them! I mean this in the best possible way. ^_^
> Recoding SQLite in Go is unlikely since Go hates assert()
Any idea what this refers to? assert is a macro in C. Is the implication that OP wants the capability of testing conditions and then turning off the tests in a production release? If so, then I think the argument is more that go hates the idea of a preprocessor. Or have I misunderstood the point being made?
Aren't SQLite’s bottlenecks primarily IO-bound (not CPU)? If so, fopen, fread, or syscalls are the most important to performance, and pure language efficiency wouldn't be the limiter.
> Rust needs to mature a little more, stop changing so fast, and move further toward being old and boring.
Talk of C99 or C++11, juxtaposed with “oh, you need the nightly build of Rust,” meant I never felt comfortable banging out “yum install rust” and giving it a go.
Other than some operating systems projects, I haven’t run into a “requires nightly” in the wild for years. Most users use the stable releases.
(There are some decent reasons to use the nightly toolchain in development even if you don’t rely on any unfinished features in your codebase, but that means they build on stable anyway just fine if you prefer.)
Good to know, maybe I’ll give it a whirl. I’d been under the (mistaken, apparently) impression that if one didn’t update monthly they were going to have a bad time.
You may be running into forwards compatibility issues, not backwards compatibility issues, which is what nightly is about.
The Rust Project releases a new stable compiler every six weeks. Because it is backwards compatible, most people update fairly quickly, as it is virtually always painless. So this may mean, if you don’t update your compiler, you may try out a new package version and it may use features or standard library calls that don’t exist in the version you’re using, because the authors updated regularly. There’s been some developments in Cargo to try and mitigate some of this, but since it’s not what the majority of users do, it’s taken a while and those features landed relatively recently, so they’re not widely adopted yet.
Nightly features are ones that aren’t properly accepted into the language yet, and so are allowed to break in backwards incompatible ways at any time.
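A sketch of one such Cargo mitigation, assuming a library declares its minimum supported Rust version via the `rust-version` field (package name hypothetical):

    [package]
    name = "example-lib"   # hypothetical
    version = "0.1.0"
    edition = "2021"
    # MSRV declaration: newer Cargo uses this to avoid or reject dependency
    # versions that require a newer compiler than the one in use.
    rust-version = "1.70"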
I don't want to sound cynical, but a lot of it has to do with the simplicity of the language. It's much harder to find a good Rust engineer than a C one. When all you have is pointers and structs, it's much easier to meet the requirements for the role.
...If you're using the alloc/std crates (which to be fair, is probably the vast majority of Rust devs). libcore and the Rust language itself do not allocate at all, so if you use appropriate crates and/or build on top of libcore yourself you too can have an explicit-allocation Rust (though perhaps not as ergonomic as Zig makes it).
I think zig generally composes better than rust. With rust you pretty much have to start over if you want reusable/composable code, that is, not use the default std. Rust has small crates for every little thing because it doesn't compose well, as well as to improve compile times. libc in the default std also is a major L.
It mainly comes down to how the std is designed. Zig has many good building blocks like allocators, and every function that allocates takes one. This allows you to reuse the same code in different kinds of situations.
Hash maps in zig std are another great example, where you can use an adapter to completely change how the data is stored and accessed while keeping the same API [1]. For example, to have a map with a limited memory bound that automatically truncates itself, in rust you need to either write a completely new data structure or rely on someone's crate again (indexmap).
Errors in zig also compose better; in rust I find error handling really annoying. Anyhow makes it better for application development, but you shouldn't use it if writing libraries.
When writing zig I always feel like I can reuse pieces of existing code by combining the building blocks at hand (including freestanding targets!). While in rust I always feel like you need to go for the fully tailored solution with its own gotchas, which is ironic considering how many crates there are and how many crates projects depend on vs. typical zig projects that often don't depend on lots of stuff.
> Nearly all systems have the ability to call libraries written in C. This is not true of other implementation languages.
From section "1.2 Compatibility". How easy is it to embed a library written in Zig in, say, a small embedded system where you may not be using Zig for the rest of the work?
Also, since you're the submitter, why did you change the title? It's just "Why is SQLite Coded in C", you added the "and not Rust" part.
The article allocates the last section to explaining why Rust is not a good fit (yet) so I wanted the title to cover that part of the conversation since I believe it is meaningful. It illustrates the tradeoffs in software engineering.
Also, Rust needs a better stdlib. A crate for every little thing is kinda nuts.
One reason I enjoy Go is the pragmatic stdlib. In most cases, I can get away without pulling in any 3p deps.
Now of course Go doesn’t work where you can’t tolerate GC pauses and need some sort of FFI. But because of the stdlib and faster compilation, Go somehow feels lighter than Rust.
Rust doesn’t really need a better stdlib as much as a broader one, since it is intentionally narrow. Go’s stdlib includes opinions like net/http and templates that Rust leaves to crates. The trade-off is Rust favors stability and portability at the core, while Go favors out-of-the-box ergonomics. Both approaches work, just for different teams.
“None of the safe programming languages existed for the first 10 years of SQLite's existence. SQLite could be recoded in Go or Rust, but doing so would probably introduce far more bugs than would be fixed, and it may also result in slower code.”
Modern languages might do more than C to prevent programmers from writing buggy code, but if you already have bug-free code due to massive time, attention, and testing, and the rate of change is low (or zero), it doesn’t really matter what the language is. SQLite could be assembly language for all it would matter.
> and the rate of change is low (or zero)
This jives with a point that the Google Security Blog made last year: "The [memory safety] problem is overwhelmingly with new code...Code matures and gets safer with time."
https://security.googleblog.com/2024/09/eliminating-memory-s...
You can find historical SQLite CVEs here
https://www.sqlite.org/cves.html
Note that although code matures, the chance of human-error bugs in C will never go to zero. We have some bad incidents like Heartbleed to show this.
Heartbleed was a great demonstration of critical systems that were under appreciated.
Too few maintainers, too few security researchers and too little funding.
When writing systems as complicated and as sensitive as the leading encryption suite used globally, no language choice will save you from under resourcing.
Right, but I believe nobody can claim that human-error bugs go to zero for Rust code.
Agreed. I rather dislike the idea of "safe" coding languages. I've been fighting a memory leak in an Elixir app for the past week. I never viewed C or C++ as unsafe. Writing code is hard, always has been, always will be. It is never safe.
This is a bit of a misunderstanding.
Safe code is just code that cannot have Undefined Behavior. C and C++ have the concept of "soundness" just like Rust, just no way to statically guard against it.
"SQLite could be recoded in Go or Rust, but doing so would probably introduce far more bugs than would be fixed, and it may also result in slower code."
We will see. On the Rust side there is Turso which is pretty active.
https://turso.tech/
there is already an sqlite port in Go :) https://gitlab.com/cznic/sqlite
I think beyond the historical reasons why C was the best choice when SQLite was being developed, or the advantages it has today, there's also just no reason to rewrite SQLite in another language.
We don't have to have one implementation of a lightweight SQL database. You can go out right now and start your own implementation in Rust or C++ or Go or Lisp or whatever you like! You can even make compatible APIs for it so that it can be a drop-in replacement for SQLite! No one can stop you! You don't need permission!
But why would we want to throw away the perfectly good C implementation, and why would we expect the C experts who have been carefully maintaining SQLite for a quarter century to be the ones to learn a new language and start over?
> But why would we want to throw away the perfectly good C implementation, and why would we expect the C experts who have been carefully maintaining SQLite for a quarter century to be the ones to learn a new language and start over?
Because a lot of language advocacy has degraded to telling others what you want them to do instead of showing by example what to do. The idea behind this is that language adoption is some kind of zero sum game. If you're developing project 'x' in language 'y' then you are by definition not developing it in language 'z'. This reduces the stature of language 'z' and the continued existence of project 'x' in spite of not being written in language 'z' makes people wonder if language 'z' is actually as much of a necessity as its proponents claim. And never mind the fact that if the decision in what language 'x' would be written were to be revisited by the authors of 'x' that not only language 'z' would be on the menu, but also languages 'd', 'l', 'j' and 'g'.
Given the common retort for why not try X project in Y new language is "it's barely used in other things. Let's wait and see it get industry adoption before trying it out" it's hard to see it as anything OTHER than a zero-sum game. As much as I like Rust I recognize some things like SQLite are better off in C. But the reason you see so much push for some new languages is because if they don't get and maintain regular adoption, they will die off.
Plenty of programming languages gained mass adoption without such tactics.
Yeah.. I always remind myself of the Netscape browser. A lesson in "if it's working, don't mess with it." My question is always the reverse: why try it in Y new language? Is there some feature that Y provides that was missing in X? How often do those features come up?
Company I worked for decided to build out a new microservice in language Y. The whole company was writing in W and X, but they decided to write the new service in Y. When something goes wrong, or a bug needs fixing, 3 people in the company of over 100 devs know Y. Guess what management is doing.. Re-writing it in X.
One good reason is that people have written golang adapters, so that you can use sqlite databases without cgo.
I agree to what I think you're saying which is that "sqlite" has, to some degree, become so ubiquitous that it's evolved beyond a single implementation.
We, of course, have sqlite the C library but there is also sqlite the database file format and there is no reason we can't have an sqlite implementation in golang (we already do) and one in pure rust too.
I imagine that in the future that will happen (pure rust implementation) and that perhaps at some point much further in the future, that may even become the dominant implementation.
And, in fact, these implementations exist. At least in Rust, there's rqlite and turso.
Thanks for this, I fully agree. One frustration I have with the modern moment is the tendency to view anything more than five years old with disdain, as utterly irrelevant and obsolete. Maybe I’m just getting old, but I like my technology dependable and boring, especially software. Glad to see someone express respect for the decades of expertise that have gone into things we take for granted.
> Safe languages insert additional machine branches to do things like verify that array accesses are in-bounds. In correct code, those branches are never taken. That means that the machine code cannot be 100% branch tested, which is an important component of SQLite's quality strategy.
Huh it's not everyday that I hear a genuinely new argument. Thanks for sharing.
I guess I don’t find that argument very compelling. If you’re convinced the code branch can’t ever be taken, you also should be confident that it doesn’t need to be tested.
This feels like chasing arbitrary 100% test coverage at the expense of safety. The code quality isn’t actually improved by omitting the checks even though it makes testing coverage go up.
In safety critical spaces you need to be able to trace any piece of a binary back to code back to requirements. If a piece of running code is implicit in code, it makes that traceability back to requirements harder. But I'd be surprised if things like bounds checks are really a problem for that kind of analysis.
I don’t see the issue. The operations which produce a bounds check are traceable back to the code which indexes into something.
What tools do you use for this? PlantUML?
Yeah sounds too clever by half, memory safe languages are less safe because they have bounds checks...maybe I could see it on a space shuttle? Well, only in the most CYA scenarios, I'd imagine.
> maybe I could see it on a space shuttle?
"Airbus confirms that SQLite is being used in the flight software for the A350 XWB family of aircraft."
https://www.sqlite.org/famous.html
Critical applications like that used to use Ada to get much more sophisticated checking than just bounds. No certified engineer would (should) ever design a safety-critical system without multiple “unreachable” fail-safe mechanisms.
Next they’ll have to tell me about how they had to turn off inlining because it creates copies of code which adds some dead branches. Bounds checks are just normal inlined code. Any bounds checked language worth its salt has that coverage for all that stuff already.
Bear in mind that SQLite is used in embedded systems, and I absolutely wouldn’t be surprised to learn it’s in space.
> If you’re convinced the code branch can’t ever be taken, you also should be confident that it doesn’t need to be tested.
I don't think I would (personally) ever be comfortable asserting that a code branch in the machine instructions emitted by a compiler can't ever be taken, no matter what, with 100% confidence, during a large fraction of situations in realistic application or library development, as to do so would require a type system powerful enough to express such an invariant, and in that case, surely the compiler would not emit the branch code in the first place.
One exception might be the presence of some external formal verification scheme which certifies that the branch code can't ever be executed, which is presumably what the article authors are gesturing towards in item D on their list of preconditions.
The argument here is that they're confident that the bounds check isn't needed, and would prefer the compiler not insert one.
The choices therefore are:
1. No bound check
2. Bounds check inserted, but that branch isn't covered by tests
3. Bounds check inserted, and that branch is covered by tests
I'm skeptical of the claim that if (3) is infeasible then the next best option is (1).
Because if it is indeed an impossible scenario, then the lack of coverage shouldn't matter. If it's not an impossible scenario then you have an untested case with option (1) - you've overrun the bounds of an array, which may not be a branch in the code but is definitely a different behaviour than the one you tested.
I'm confused about the claim though. These branches are not at the source level, and test coverage usually is measured at the source level.
If a code branch can't ever be taken, doesn't that mean you do not need it? Basically, it must be code that will not get executed, so leaving it out does not matter.
If you then can come up with a scenario where you need it, well, in fully tested code you do need to test it.
There is a whole 'nother level of safety validation that goes beyond your everyday OWASP, or heck even what we consider "highly regulated" industry requirements that 95-99% of us devs care about. SQLite is used in some highly specialized, highly sensitive environments, where they are concerned about bit flips, and corrupted memory. I had the luxury of sitting through Richard Hipp's talk about it one time, but I am certainly butchering it.
You didn't understand the argument. The testing is what instills the confidence.
So is the argument that safe langs produce checks like:

    if (index >= length) { panic(); }

that are never actually run if the code is correct? But (if I understand correctly) these are checks implicitly added by the compiler. So the objection amounts to questioning the correctness of this auto-generated code, and is predicated upon mistrusting the correctness of the compiler? But presumably the Rust compiler itself would have thorough tests that these kinds of checks work? Someone please correct me if I'm misunderstanding the argument.
One of the things that SQLite is explicitly designed to do is have predictable behavior in a lot of conditions that shouldn't happen. One of those predictable behavior is that it does its best to stay up and running, and continuing to do the best it can. Conditions where it should succeed in doing this include OOM, the possibility of corrupted data files, and (if possible) misbehaving CPUs.
Automatic array bounds checks can get hit by corrupted data. Thereby leading to a crash of exactly the kind that SQLite tries to avoid. With complete branch testing, they can guarantee that the test suite includes every kind of corruption that might hit an array bounds check, and guarantee that none of them panic. But if the compiler is inserting branches that are supposed to be inaccessible, you can't do complete branch testing. So now how do you know that you have tested every code branch that might be reached from corrupted data?
Furthermore those unused branches are there as footguns which are reachable with a cosmic ray bit flip, or a dodgy CPU. Which again undermines the principle of keeping running if at all possible.
In rust at least you are free to access an array via .get which returns an option and avoids the “compiler inserted branch” (which isn’t compiler inserted by the way - [] access just implicitly calls unwrap on .get and sometimes the compiler isn’t able to elide).
Also you rarely need to actually access by index - you could just access using functional methods on .iter() which avoids the bounds check problem in the first place.
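A small sketch of those alternatives (illustrative only, nothing from SQLite):

    // Three ways to read from a slice; only the first carries an implicit panic branch.
    fn demo(v: &[u64]) -> u64 {
        let a = v[0];                           // panics if empty: bounds check + panic branch
        let b = v.get(1).copied().unwrap_or(0); // explicit, testable fallback path
        let c: u64 = v.iter().skip(2).sum();    // no indexing, so no bounds-check branch
        a + b + c
    }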
For slices the access is handled inside of the compiler: https://github.com/rust-lang/rust/blob/235a4c083eb2a2bfe8779...
I'm checking to see how array access is implemented, whether through deref to slice, or otherwise.
I had Vec in mind, but regardless, nothing forces you to use the bounds-checked variant vs one that returns Option<T>. And if you really are sure the bounds hold you can always use the assume crate or just unwrap_unchecked explicitly.
Keeping running if possible doesn't sound like the best strategy for stability. If data was corrupted in memory in a way that would cause a bounds check to fail, then carrying on is likely to corrupt more data. Panic, dump a log, let a supervisor program deal with the next step, or a human, but don't keep going and potentially persist corrupted data.
What the best strategy is depends on your use case.
The use case that SQLite has chosen to optimize for is critical embedded software. As described in https://www.sqlite.org/qmplan.html, the standard that they base their efforts on is a certification for use in aircraft. If mission critical software on a plane is allowed to crash, this can render the controls inoperable. Which is likely to lead to a very literal crash some time later.
The result is software that has been optimized to do the right thing if at all possible, and to degrade gracefully if that is not possible.
Note that the open source version of SQLite is not certified for use in aviation. But there are versions out there that have been certified. (The difference is a ton of extra documentation.) And in fact SQLite is in use by Airbus. Though the details of what exactly for are not, as far as I know, public.
If this documented behavior is not what you want for your use case, then you should consider using another database. Though, honestly, no other database comes remotely close when it comes to software quality. And therefore I doubt that "degrade as documented rather than crash" is a good reason to avoid SQLite. (There are lots of other potential reasons for choosing another database.)
outside political definitions, I'm not sure "crash and restart with a supervisor" and "don't crash" are meaningfully different? they're both error-handling tactics, likely perfectly translatable to each other, and Erlang stands as an existence proof that crashing is a reasonable strategy in extremely reliable software.
I fully recognize that political definitions drive purchases, so it's meaningful to a project either way. but that doesn't make it a valid technical argument.
It still needs to detect that there is corrupted data and dump the log, and an external supervisor would not be the best fit, since in some runtimes it could be missing. So they just build it into the library itself, and we come full circle.
I think it’s less like doubting that the given panic works and more like an extremely thorough proof that all possible branches of the control flow have acceptable behavior. If you haven’t tested a given control flow, the issue is that it’s possible that the end result is some indeterminate or invalid state for the whole program, not that the given bounds check doesn’t panic the way it’s supposed to. On embedded for example (which is an important usecase for SQLite) this could result in orphaned or broken resources.
> I think it’s less like doubting that the given panic works and more like an extremely thorough proof that all possible branches of the control flow have acceptable behavior.
The way I was thinking about it was: if you somehow magically knew that nothing added by the compiler could ever cause a problem, it would be redundant to test those branches. Then wondering why a really well tested compiler wouldn't be equivalent to that. It sounds like the answer is, for the level of soundness sqlite is aspiring to, you can't make those assumptions.
But does it matter if that control flow is unreachable?
If the check never fails, it is logically equivalent to not having the check. If the code isn't "correct" and the panic is reached, then the equivalent c code would have undefined behavior, which can be much worse than a panic.
In the first case, if it is actually unreachable, I would never want that code ending up in my binary at all. It must be optimised out.
Your second case implies that it is reachable.
> But (if I understand correctly) these are checks implicitly added by the compiler.
This is a dubious statement. In Rust, the array indexing operator arr[i] is syntactic sugar for calling the function arr.index(i), and the implementation of this function on the standard library's array types is documented to perform a bounds-check assertion and access the element.
So the checks aren't really implicitly added -- you explicitly called a function that performs a bounds check. If you want different behavior, you can call a different, slightly-less-ergonomic indexing function, such as `get` (which returns an Option, making your code responsible for handling the failure case) or `get_unchecked` (which requires an unsafe block and exhibits UB if the index is out of bounds, like C).
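A small sketch of that desugaring and the alternatives:

    use std::ops::Index;

    fn main() {
        let arr = [10, 20, 30];
        // The indexing operator is sugar for Index::index, which is documented
        // to bounds-check and panic on an out-of-range index:
        let a = arr[1];
        let b = *arr.index(1);
        assert_eq!(a, b);
        // The fallible alternative hands the failure case back to the caller:
        assert_eq!(arr.get(9), None);
    }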
Another commenter in this thread used the phrase "complex abomination" which seems more and more apt the more I learn about Rust.
> questioning the correctness of this auto-generated code
I wouldn't put it that way. Usually when we say the compiler is "incorrect", we mean that it's generating code that breaks the observable behavior of some program. In that sense, adding extra checks that can't actually fail isn't a correctness issue; it's just an efficiency issue. I'd usually say the compiler is being "conservative" or "defensive". However, the "100% branch testing" strategy that we're talking about makes this more complicated, because this branch-that's-never-taken actually is observable, not to the program itself but to its test suite.
no, it's an (accidental) red herring argument
sure, safety checks are added, but
it's ignoring that many such checks get reliably optimized away
worse, it's a bit like saying "in case of a broken invariant I prefer arbitrary, potentially highly problematic behavior over clean aborts (or errors) because my test tooling is inadequate"
instead of saying "we haven't found adequate test tooling for our use case"
Why inadequate? Because technically test setups can use
1. fault injection to test such branches even if normally you would never hit them (see the sketch below)
2. for many of such checks (especially array bounds checks) you can pretty reliably identify them and then remove them from your test coverage statistic
idk. what the tooling of rust wrt this is in 2025, but around the rust 1.0 times you mainly had C tooling you applied to rust, so you had problems like that back then.
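A minimal sketch of point 1, fault injection, in plain Rust test code: deliberately feed an out-of-range index so the compiler-inserted panic branch is actually taken, and therefore covered, under test:

    #[test]
    fn bounds_check_branch_is_exercised() {
        // Force the "impossible" path: the panic branch behind the indexing
        // operator runs here, so coverage tooling sees it executed.
        let v = vec![1u8, 2, 3];
        let result = std::panic::catch_unwind(|| v[10]);
        assert!(result.is_err());
    }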
It's not like that, the compiler explicitly doesn't do compile-time checks here and offloads those to the runtime.
Rust does not stop you from writing code that accesses out of bounds, at all. It just makes sure that there's an if that checks.
Ok, but you can still test all the branches in your source code and have 100% coverage. Those additional `if` branches are added by the compiler. You are responsible for testing the code you write, not the one that actually runs. Your compiler's test suite is responsible for the rest.
By the same logic one could also claim that tail recursion optimisation, or loop unrolling are also dangerous because they change the way code works, and your tests don't cover the final output.
If they produce control flow _in the executable binary_ that is untested, then they could conceivably lead to broken states. I don’t believe most of those sorts of transformations cause alternative control flows to be added to the executable binary.
I don’t think anyone would find the idea compelling that “you are only responsible for the code you write, not the code that actually runs” if the code that actually runs causes unexpected invalid behavior on millions of mobile devices.
>You are responsible for testing the code you write, not the one that actually runs.
Hipp worked as a military contractor on battleships; furthermore, years later SQLite was under contract with every proto-smartphone company in the USA. Under these constraints you maybe are not responsible for testing what the compiler spits out across platforms and different compilers, but doing so makes the project a lot more reliable, and makes it sexier for embedded and weapons.
I don't see anything wrong with taking responsibility for the code that actually runs. I would argue it's that level of accountability has played a part in Sqlite being such a great project.
> You are responsible for testing the code you write, not the one that actually runs.
This is not correct for every industry.
It's the sort of argument that I wouldn't accept from most people and most projects, but from Dr Hipp isn't most people and Sqlite isn't most projects.
It's a bad argument.
Certainly don't get me wrong, SQLite is one of the best and most thoroughly tested libraries out there. But this was an argument included just to have 4 arguments. That's because 2 of the arguments break down as "Those languages didn't exist when we first wrote SQLite and we aren't going to rewrite the whole library just because a new language came around."
Any language, including C, will emit or not emit instructions that are "invisible" to the author. For example, whenever the C compiler decides it can autovectorize a section of a function it'll be introducing a complicated set of SIMD instructions and new invisible branch tests. That can also happen if the C compiler decides to unroll a loop for whatever reason.
The entire point of compilers and their optimizations is to emit instructions which keep the semantic intent of higher level code. That includes excluding branches, adding new branches, or creating complex lookup tables if the compiler believes it'll make things faster.
Dr Hipp is completely correct in rejecting Rust for SQLite. Sqlite is already written and extremely well tested. Switching over to a new language now would almost certainly introduce new bugs that don't currently exist as it'd inevitably need to be changed to remain "safe".
> Any language, including C, will emit or not emit instructions that are "invisible" to the author
Presumably this is why they do 100% test coverage. All of those instructions would be tested and not invisible to the test suite
If it were as completely tested as claimed, then switching to Rust would be trivial: all you would need to do is pass the test suite and all bugs would be gone. I can think of other reasons not to jump to Rust (it is a lot of code, SQLite already works well, test coverage is very good but still incomplete, and Rust only solves a few correctness problems), just not the claim that SQLite is already tested enough to be free of the kinds of bugs that Rust might actually prevent.
> If it was as completely tested as claimed
It is.
> then switching to rust would be trivial
So prove it. Hint: it's not trivial.
> to rust would be trivial.
no, you still need to rewrite, re-optimize, etc. everything
it would make it much easier to be fully compatible, sure, but that doesn't make it trivial
furthermore, parts of its (mostly internal) design are strongly influenced by C-specific dev-UX aspects, so you wouldn't write them the same way, and tests for them (as opposed to integration tests) may not apply
which in general also means that you most likely would break some special-purpose/unusual users who have "brittle" (not guaranteed) assumptions about SQLite
if you have code which changes very little, if at all, and has no major issues, don't rewrite it
but most of the new "external" things written around SQLite, alternative VFS impl. etc., tend to be at most partially written in C
I wonder if this problem could be mitigated by not requiring coverage of branches that unconditionally lead to panics. Or if there could be some kind of marking on those branches to indicate that they should never occur in correct code.
You'd want to statically prove that any panic is unreachable
Couldn't a method like `get_unchecked()` be used to avoid the bounds check[0] if you know it's safe?
0: https://doc.rust-lang.org/std/vec/struct.Vec.html#method.get...
Yes. You have to write `unsafe { ... }` around it, so there's an ergonomic penalty plus a more nebulous "sense that you're doing something dangerous that might get some skeptical looks in code review" penalty, but the resulting assembly will be the same as indexing in C.
I figured, but I guess I don't understand this argument then. SQLite as a project already spends a lot of time on quality so doing some `unsafe` blocks with a `// SAFETY:` comment doesn't seem unreasonable if they want to avoid the compiler inserting a panic branch for bounds checks.
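A minimal sketch of that pattern (illustrative, not SQLite code):

    /// Sums a slice without bounds checks in the loop body.
    fn sum(v: &[u64]) -> u64 {
        let mut total = 0;
        for i in 0..v.len() {
            // SAFETY: `i` is always less than `v.len()` by the loop bound,
            // so no bounds check (and no panic branch) is emitted here.
            total += unsafe { *v.get_unchecked(i) };
        }
        total
    }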
If you put unsafe around almost all of your code (array indexing) aren't you better off just writing C?
Perhaps if the only thing you're doing is array indexing? Though I'm not sure that would apply in this particular case anyways.
In many cases LLVM can prove the bounds check is redundant or otherwise is unnecessary and will optimize it away.
It's new because it makes no sense.
There already is an implicit "branch" on every array access in C, it's called an access violation.
Do they test for a segfault on every single array access in the code base? No? Then they don't really have 100% branch coverage, do they?
I think those branches are often not there because it's provably never going out of bounds. There are ways to ensure the compiler knows the bounds cannot be broken.
It's interesting to consider (and the whole page is very well-reasoned), but I don't think that the argument holds up to scrutiny. If such an automatic bounds-check fails, then the program would have exhibited undefined behavior without that branch -- and UB is strictly worse than an unreachable branch that does something well-specified like aborting.
A simple array access in C:

    value = array[index];

...can be thought of as being equivalent to:

    if (index < length) value = array[index];
    else UB();

where the "UB" function can do literally anything. From the perspective of exhaustively testing and formally verifying software, I'd rather have the safe-language equivalent:

    if (index < length) value = array[index];
    else abort();

...because at least I can reason about what happens if the supposedly-unreachable condition occurs.

Dr. Hipp mentions that "Recoding SQLite in Go is unlikely since Go hates assert()", implying that SQLite makes use of assert statements to guard against unreachable conditions. Surely his testing infrastructure must have some way of exempting unreachable assert branches -- so why can't bounds checks (that do nothing but assert undefined behavior does not occur) be treated in the same way?
The 100% branch testing is on the compiled binary. To exempt unreachable assert branches, turn off assertions, compile, and test.
A more complex C program can have index range checking at a different place than the simple array access. The compiler's flow analysis isn't always able to confirm that the index is guaranteed to be checked. If it therefore adds a cautionary (and unneeded) range check, then this code branch can never be exercised, making the code no longer 100% branch tested.
the problem is it's kinda an anti-argument
you basically say that if deeply unexpected things happen, you prefer your program doing wildly arbitrary and as such potentially dangerous things over it having a clean abort or proper error. ... that doesn't seem right
worse, it's due to a lack in the tooling used and not a fundamental problem: not only can you test these branches (using fault injection), you also often (not always) can separate them from relevant branches when collecting the branch statistics
so the whole argument misses the point (which is that tooling is lacking, not that extra checks for array bounds and similar are bad)
lastly, array bounds checking is probably the worst example they could have given, as it
- often can be disabled/omitted in optimized builds
- is quite often optimized away
- often has quite low perf overhead
- bounds-check branches are often very easy to identify, i.e. excluding them from a 100% branch testing statistic is viable
- out-of-bounds reads/writes are some of the most common cases of memory unsafety leading to security vulnerabilities (including full RCE cases)
This is a dumb argument, it's like saying for a perfect human being there's no need for smart pointers, garbage collection or the borrow checker.
> In incorrect code, the branches are taken, but code without the branches just behaves unpredictably.
It's like seat belts.
E.g. what if we drive four blocks and then the case occurs where the seatbelt is needed? Okay, we have an explicit test for that.
But we cannot test everything. We have not tested what happens if we drive four blocks, and then take a right turn, and hit something half a block later.
Screw it, just remove the seatbelts and not have this insane untested space whereby we are never sure whether the seat belt will work properly and prevent injury!
> All that said, it is possible that SQLite might one day be recoded in Rust. Recoding SQLite in Go is unlikely since Go hates assert(). But Rust is a possibility. Some preconditions that must occur before SQLite is recoded in Rust include:
- Rust needs to mature a little more, stop changing so fast, and move further toward being old and boring.
- Rust needs to demonstrate that it can be used to create general-purpose libraries that are callable from all other programming languages.
- Rust needs to demonstrate that it can produce object code that works on obscure embedded devices, including devices that lack an operating system.
- Rust needs to pick up the necessary tooling that enables one to do 100% branch coverage testing of the compiled binaries.
- Rust needs a mechanism to recover gracefully from OOM errors.
- Rust needs to demonstrate that it can do the kinds of work that C does in SQLite without a significant speed penalty.
1. Rust has had ten years since 1.0. It changes in backward compatible ways. For some people, they want no changes at all, so it’s important to nail down which sense is meant.
2. This has been demonstrated.
3. This one hinges on your definition of “obscure,” but the “without an operating system” bit is unambiguously demonstrated.
4. I am not an expert here, but given that you’re testing binaries, I’m not sure what is Rust specific. I know the Ferrocene folks have done some of this work, but I don’t know the current state of things.
5. Rust as a language does no allocation. This OOM behavior is in the standard library, which you're not using in these embedded cases anyway. There, you're free to do whatever you'd like, as it's all just library code.
6. This also hinges on a lot of definitions, so it could be argued either way.
> 2.
ironically, if we look at how things play out in practice, rust is far more suited as a general-purpose language than C, to the point where I would argue C is only a general-purpose language on a technicality, not on a practical IRL basis
this is especially ridiculous when they argue C is the fastest general-purpose language, when that has proven to simply not hold up in larger IRL projects (i.e. not micro benchmarks)
C has terrible UX for generic code reuse and memory management, and this often means that in IRL projects people don't write the fastest code. Wrt. memory management it's not rare to see unnecessary clones, as not cloning too easily leads to bugs. Wrt. data structures you write the code which is maintainable, robust and fast enough, and sometimes add the 10th maximally simple reimplementation (or C macro or similar) of some data structure instead of reusing data structures people have spent years fine-tuning.
When people switched a lot from C to C++, most general-purpose projects got faster, not slower. And even for the C++ to Rust case it's not rare that companies end up with faster projects after the switch.
Both C++ and Rust also allow more optimization in general.
So C is only fastest in micro benchmarks, after excluding stuff like Fortran for not being general purpose, while itself not really being used much anymore for general-purpose projects...
I think Rust (and C++) are just too complicated and visually ugly, and ultimately that hurts the maintainability of the code. C is simple, universal, and arguably beautiful to look at.
C is so simple that you will need to read a 700-page, committee-written manual before you can attempt to write it correctly.
These are all opinions.
Rust has dependency hell and supply chain attacks like with npm.
The lack of dependency hell is a bit of an illusion when it comes to C. What other languages solve via library most C projects will reimplement themselves, which of course increases the chance for bugs.
But that is optional. For this kind of project, it is logical to adopt something like the tiger battle ethos and own all the code and have no external deps (or vendor them). Even do your own std if you wanna.
Is it hard work? Sure, but it is not that different from what you see in certain C projects that don't use external deps either.
Tigerbeetle. Your autocorrect really mangled that one ...
You control the dependencies you put in Cargo.toml.
What about the dependencies of your dependencies?
I don't put too many things in Cargo.toml and it still pulls like a hundred things
Then don't? In C you would just implement everything yourself, so go do that in Rust if you don't want dependencies.
In C I've seen more half-baked json implementations than I can count on my fingers because using dependencies is too cumbersome in that ecosystem and people just write it themselves but most of the time with more bugs.
Your system is going to be owned, but at least it's going to be "memory safely" owned!
P. S.
That's if you don't account for all the unsafe sections scattered everywhere in all those dependencies.
One question towards maturity: has any working version of the Rust compiler ever existed? By which I mean one that successfully upholds the memory-safety guarantees Rust is supposed to make, and does not have any "soundness holes" (which IIRC were historically used as a blank check / excuse to break backwards compatibility).
The current version of the Rust compiler definitely doesn't -- there's known issues like https://github.com/rust-lang/rust/issues/57893 -- but maybe there's some historical version from before the features that caused those problems were introduced.
has there ever been a modern optimizing C compiler free of pretty serious bugs? (it's a rhetorical question, there hasn't been any)
Every compiler has soundness bugs. They’re just programs like any other. This isn’t exclusive to Rust.
In general, the way Rust blurs the line between "bugs in the compiler" and "problems with how the language is designed" seems pretty harmful and misleading. But it's also a core part of the marketing strategy, so...
What makes you say this is a core part of the marketing strategy? I don’t think Rust’s marketing has ever focused on compiler bugs or their absence.
You are correct that Rust's marketing does not claim that there are no bugs in its compiler. In fact it does the opposite: it suggests that there are no problems with the language, by asserting that any observed issue in the language is actually a bug in the compiler.
Like, in the C world, there's a difference between "the C specification has problems" and "GCC incorrectly implements the C specification". You can make statements about what "the C language" does or doesn't guarantee independently of any specific implementation.
But "the Rust language" is not a specification. It's just a vague ideal of things the Rust team is hoping their compiler will be able to achieve. And so "the Rust language" gets marketed as e.g. having a type system that guarantees memory safety, when in fact no such type system has been designed -- the best we have is a compiler with a bunch of soundness holes. And even if there's some fundamental issue with how traits work that hasn't been resolved for six years, that can get brushed off as merely a compiler bug.
This propagates down into things like Rust's claims about backwards compatibility. Rust is only backwards-compatible if your programs are written in the vague-ideal "Rust language". The Rust compiler, the thing that actually exists in the real world, has made a lot of backwards-incompatible changes. But these are by definition just bugfixes, because there is no such thing as a design issue in "the Rust language", and so "the Rust language" can maintain its unbroken record of backwards-compatibility.
> And even if there's some fundamental issue with how traits work that hasn't been resolved for six years, that can get brushed off as merely a compiler bug.
Is it getting brushed off as merely a compiler bug? At least if I'm thinking of the same bug as you [0] the discussion there seems to be more along the lines of the devs treating it as a "proper" language issue, not a compiler bug. At least as far as I can tell there hasn't been a resolution to the design issue, let alone any work towards implementing a fix in the compiler.
The soundness issue that I see more frequently get "brushed off as merely a compiler bug" is the lifetime variance one underpinning cve-rs [1], which IIRC the devs have long decided what the proper behavior should be but actually implementing said behavior is blocked behind some major compiler reworks.
> has made a lot of backwards-incompatible changes
Not sure I've seen much evidence for "a lot" of compatibility breaks outside of the edition system. Perhaps I'm just particularly (un)lucky?
> because there is no such thing as a design issue in "the Rust language"
I'm not sure any of the Rust devs would agree? Have any of them made a claim along those lines?
[0]: https://github.com/rust-lang/rust/issues/57893
[1]: https://github.com/Speykious/cve-rs
> Is it getting brushed off as merely a compiler bug?
Yes, this thread contains an example: https://news.ycombinator.com/item?id=45587209 . (I linked the same bug you did in the comment that that's a reply to.)
The Rust team may see this as a language design issue internally, and I'd be inclined to agree. Rust's outward-facing marketing does not reflect this view.
> I linked the same bug you did in the comment that that's a reply to
Ah, my apologies. Not sure exactly how I managed to miss that.
That being said, I guess I might have read that bit of your comment different than you had in mind; I was thinking of whether the Rust devs were dismissing language design issues as compiler bugs, not what third parties (albeit one with an unusually relevant history in this case) may think.
> Rust's outward-facing marketing does not reflect this view.
As above, perhaps I interpret the phrase "outward-facing marketing" differently than you do. I typically associate that (and "marketing" in general, in this context) with more official channels, whether that's official posts or posts by active devs in an official capacity.
Oh, I didn't realize steveklabnik wasn't an official member of the project anymore (as of 2022 apparently: https://blog.rust-lang.org/2022/01/31/changes-in-the-core-te... ). I do think he still expressed this position back when he was a major public face of the language, but it seems unfair to single him out and dig through his comment history.
Rust's marketing is pretty grassroots in general, but even current official sources like https://rust-lang.org/ say things like "Rust’s rich type system and ownership model guarantee memory-safety" that are only true of the vague-ideal "Rust language" and are not true of the type system they actually designed and implemented in the Rust compiler.
"1. Rust has had ten years since 1.0. ..."
Rust insists on its own package manager "rustup" and frowns on distro maintainers. When Rust is happy to just be packaged by the distro and rustup has gone away, then it will have matured to at least adolescence.
Rust has long worked with distro package maintainers, and as far as I know, Rust is packaged in every major Linux distribution.
There are other worlds out there than Linux.
So why insist on rustup?
different goals
the rust version packaged in distros is for compiling rust code shipped as part of the distro. This means it
- is normally not the newest version (which, to be clear, is not bad per se, but not necessarily what you need)
- might not have all optional components (e.g. no clippy)
but if you, idk, write a server deployed by your company
- you likely want all components
- you don't need to care what version the distro pinned
- you have little reason not to use the latest rust compiler
for other use cases you have other reasons, some need nightly rust, some want to test against beta releases, some want to be able to test against different rust versions etc. etc.
rustup exists (today) for the same reason why a lot of dev projects use project-specific copies of all kinds of tooling and libraries which do not match whatever their distro ships: the distro use case and the generic dev use case have diverging requirements! (Other examples: nvm (node), flutter, java etc.)
Also some distros are notorious for shipping outdated software (debian "stable").
And not everything is Linux, rustup works on OSX.
Distributions generally package the versions of compilers that are needed to build the programs in their package manager. However, many developers want more control than that. They may want to use different versions of the compiler on different projects, or a different version than what’s packaged.
Basically, people use it because they prefer it.
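For instance, the per-project workflows described above look roughly like this (all real rustup subcommands; version numbers illustrative):

    rustup toolchain install 1.70.0   # install a specific older stable toolchain
    rustup override set 1.70.0        # pin it for the current project directory only
    rustup update stable              # track the latest stable elsewhere
    rustup toolchain install nightly  # opt into nightly only where it's needed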
> Rust has had ten years since 1.0. It changes in backward compatible ways. For some people, they want no changes at all, so it’s important to nail down which sense is meant.
I’d love to see rust be so stable that MSRV is an anachronism. I want it to be unthinkable you wouldn’t have any reason not to support Rust from forever ago because the feature set is so stable.
> I want it to be unthinkable you wouldn’t have any reason not to support Rust from forever ago because the feature set is so stable.
What other languages satisfy this criteria?
Fortran, cobol, C or other old languages that stopped changing but are still used.
All three of the languages you list are still actively updated. Coincidentally, the latest standard for all three of them is from 2023(ish):
- C23: https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3096.pdf
- Cobol 2023: https://www.incits.org/news-events/news-coverage/available-n... (random press release since a PDF of the standard didn't immediately show up in a search)
- Fortran 2023: https://wg5-fortran.org/N2201-N2250/N2212.pdf
C2Y has a fair number of already-accepted features as well and it's relatively early in the standard release cycle: https://thephd.dev/c2y-hitting-the-ground-running
Can’t compile with just a PDF file, though.
Yes, compilers will take some time to implement the new standards.
C23 seems to have decent support from a few compilers, with GCC leading the pack: https://en.cppreference.com/w/c/compiler_support/23.html
gcobol supports (or at least aims to support?) COBOL 2023: https://gcc.gnu.org/onlinedocs/gcc-15.1.0/gcobol/gcobol.html. Presumably there are other compilers working on support as well.
Intel's Fortran compiler and LFortran have partial support for Fortran 2023 (https://www.intel.com/content/www/us/en/developer/articles/t..., https://docs.lfortran.org/en/usage/). I'd guess that support both from these compilers and from other compilers (Flang?) would improve over time as well..
For a little more color on 5, as a user of no_std Rust on embedded processors I use crates like heapless or trybox that provide Vec, String, etc. APIs like the std ones, but fallible.
Of course, two libraries that choose different no_std collection types can't communicate...but hey, we're comparing to C here.
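A minimal sketch of that fallible style, assuming the heapless crate's fixed-capacity Vec (capacity and function names illustrative):

    #![no_std]
    use heapless::Vec; // fixed-capacity vector stored inline; no heap involved

    /// Collects even bytes into a stack-allocated vector; returns Err when full
    /// instead of aborting the way a failed heap allocation would.
    fn collect_evens(input: &[u8]) -> Result<Vec<u8, 16>, ()> {
        let mut out = Vec::new();
        for &b in input.iter().filter(|b| **b % 2 == 0) {
            out.push(b).map_err(|_| ())?; // push is fallible: capacity is 16
        }
        Ok(out)
    }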
even OOM isn't that different
like, there are some things you can do well in C
and these things you can do in rust too, though with a bit of pain and limitations on how you write rust
and then there is the rest, which looks "hard but doable" in C, but the more you learn about it the more it's an "uh, wtf, nightmare" case where "let's kill+restart and have robustness even in the presence of the process/error kernel dying" is nearly always the right answer.
Why can't `if condition { panic(err) }` be used in Go as an assert equivalent?
Because C's assert gets compiled out if you have NDEBUG defined in your program. How do you do conditional compilation in Go (at the level of conditionally including or not including a statement)?
It's kinda sad to read, as most of their arguments might seem right at first but really fall apart when put under scrutiny.
Like, why defend C in 2025 when you only have to defend C in 2000 and then argue you have an old, stable, deeply tested C code base which has no problem with anything like "commonly having memory safety issues" and is maintained by a small group of people very highly skilled in C.
That argument alone is all you need: a win, simple, straightforward, hard to contest.
But most of the other arguments they list can be picked apart and are only half true.
> But most of the other arguments they list can be picked apart and are only half true
I'd like to see you pick the other arguments apart
> Other programming languages sometimes claim to be "as fast as C". But no other language claims to be faster than C for general-purpose programming, because none are.
Not OP, and I'm not really arguing with the post, but this struck me as a really odd thing to include in the article. Of course nothing is going to be faster than C, because it compiles straight to machine code with no garbage collection. Literally any language that does the same will be the same speed but not faster, because there's no way to be faster. It's physically impossible.
A much better statement, and one in line with the rest of the article, would be that at the time C and C++ were really the only viable languages that gave them the performance they wanted, and C++ wouldn't have given them the interoperability they wanted. So their only choice was C.
I think one additional factor that should be taken into account is the amount of effort required to achieve a given level of performance, as well as what extensions you're willing to accept. C with potentially non-portable constructs (intrinsics, inline assembly, etc.) and an unlimited amount of effort put into it provides a performance ceiling, but it's not inconceivable that other programming languages could achieve an equal level of performance with less effort, especially if you compare against plain standard C. Languages like ISPC that expose SIMD/parallelism in a more convenient manner is one example of this.
Another somewhat related example is Fortran and C, where one reason Fortran could perform better than C is the restrictions Fortran places on aliasing. In theory, one could use restrict in C to replicate these aliasing restrictions, but in practice restrict is used fairly sparingly, to the point that when Rust tried to enable its equivalent it had to back out the change multiple times because it kept exposing bugs in LLVM's optimizer.
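A small Rust sketch of that aliasing point: a &mut slice parameter carries the exclusivity guarantee that C only gets from restrict, so the optimizer can assume no overlap:

    // `dst` and `src` cannot overlap: &mut guarantees exclusive access, so the
    // compiler may vectorize freely without the aliasing checks plain C would need.
    fn add_into(dst: &mut [f32], src: &[f32]) {
        for (d, s) in dst.iter_mut().zip(src) {
            *d += *s;
        }
    }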
The argument you propose only works for justifying a maintenance mode for an old codebase. If you want to take the chance to turn away new developers from complex abominations like C++ and Rust and garbage-collected sloths like Java, and get them to consider the comparatively simple but ubiquitous language that is C, you have to offer more.
Is SQLite looking for new developers? Will they ever need a large amount of developers like a mega-corp that needs to hire 100 React engineers?
No, but as morbid as this sounds, the three(?) devs one day will pass away so now what?
Then the rights will be sold to a FAANG, or an open source fork like libSQL will live on.
(it’s from 2017)
As I write more code, use more software and read about rewrites...
The biggest gripe I have with a rewrite is... A lot of the time we rewrite for feature parity. Not the exact same thing. So you are kind of ignoring/missing/forgetting all those edge cases and patches that were added along the way for so many niche or other reasons.
This means broken software. Something which used to work before but not anymore. They'll have to encounter all of them again in the wild and fix it again.
Obviously if we are to rewrite an important piece of software like this, you'd put more emphasis on all of these. But it's hard for me to believe it will be 100%.
But other than sqlite, think SDL. If it were to be rewritten, it's really hard for me to believe the effect would be negligible. I'm guessing horrible releases before it gets better, and users complaining about things that used to work.
C is going to be there long after the next Rust is where my money is. And even if Rust is still present, there would be a new Rust then.
So why rewrite? Rewrites shouldn't be the default thinking no?
"Why SQLite is coded in C..." is an explanation, as documented at sqlite.org.
"Why is SQLite coded in C and not Rust?" is a question, which immediately makes me want to ask "Why do you need SQLite coded in Rust?".
Because the title has been editorialized.
fwiw there's a project doing just that: https://github.com/tursodatabase/turso
they have a blog hinting at some answers as to "why": https://turso.tech/blog/introducing-limbo-a-complete-rewrite...
Indeed. Why is SQLite coded in C and not BASIC?
Two previous, and substantial, discussions on this page:
https://news.ycombinator.com/item?id=28278859 - August 2021
https://news.ycombinator.com/item?id=16585120 - March 2018
I'm curious about tptacek's comment (https://news.ycombinator.com/item?id=28279426). 'the "security" paragraphs in this page do the rest of the argument a disservice. The fact is, C is a demonstrable security liability for sqlite.'
The current doc no longer has any paragraphs about security, or even the word security once.
The 2021 edition of the doc contained this text which no longer appears: 'Safe languages are often touted for helping to prevent security vulnerabilities. True enough, but SQLite is not a particularly security-sensitive library. If an application is running untrusted and unverified SQL, then it already has much bigger security issues (SQL injection) that no "safe" language will fix.
It is true that applications sometimes import complete binary SQLite database files from untrusted sources, and such imports could present a possible attack vector. However, those code paths in SQLite are limited and are extremely well tested. And pre-validation routines are available to applications that want to read untrusted databases that can help detect possible attacks prior to use.'
https://web.archive.org/web/20210825025834/https%3A//www.sql...
It sounds like the core doesn't even allocate, and presumably the extended library allocates in limited places using safe patterns. So there wouldn't be much benefit from Rust anyway, I'd think. Has SQLite ever had a memory leak or use-after-free bug in a production release? If so, that answers the question. But I've never heard of one.
Also, does it use doubly linked lists or graphs at all? Those can, in a way, be safer in C since Rust makes you roll your own virtual pointer arena.
> Also, does it use doubly linked lists or graphs at all? Those can, in a way, be safer in C since Rust makes you roll your own virtual pointer arena.
You can implement a linked list in Rust the same as you would in C using raw pointers and some unsafe code. In fact there is one in the standard library.
Rust's memory safety guarantees aren't exclusive to heap allocation. In fact, the core language doesn't heap allocate at all; allocation comes from the standard library.
You can write a linked list the same way you would in C if you wish.
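For instance, a rough C-style sketch (hypothetical, not production code; std::collections::LinkedList is the real thing):

    use std::ptr;

    // A doubly linked list built on raw pointers, the way C would.
    #[allow(dead_code)]
    struct Node {
        value: i32,
        prev: *mut Node,
        next: *mut Node,
    }

    struct List {
        head: *mut Node,
        tail: *mut Node,
    }

    impl List {
        fn new() -> Self {
            List { head: ptr::null_mut(), tail: ptr::null_mut() }
        }

        fn push_back(&mut self, value: i32) {
            // Box::into_raw hands ownership to us as a raw pointer,
            // much like malloc'ing a node in C.
            let node = Box::into_raw(Box::new(Node {
                value,
                prev: self.tail,
                next: ptr::null_mut(),
            }));
            if self.tail.is_null() {
                self.head = node;
            } else {
                // SAFETY: tail is non-null and points to a live node.
                unsafe { (*self.tail).next = node };
            }
            self.tail = node;
        }
    }

    impl Drop for List {
        fn drop(&mut self) {
            // Walk and free every node, as free() would in C.
            let mut cur = self.head;
            while !cur.is_null() {
                // SAFETY: cur is non-null; from_raw reclaims the Box.
                let next = unsafe { (*cur).next };
                drop(unsafe { Box::from_raw(cur) });
                cur = next;
            }
        }
    }

    fn main() {
        let mut list = List::new();
        list.push_back(1);
        list.push_back(2);
        let mut cur = list.head;
        while !cur.is_null() {
            // SAFETY: cur is non-null within the loop.
            unsafe {
                println!("{}", (*cur).value);
                cur = (*cur).next;
            }
        }
    }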
> Has SQLite ever had a memory leak or use-after-free bug in a production release?
sure, it's an old library; they've had pretty much everything (not because they don't know what they are doing but because shit happens)
let's check CVEs of the last few years:
- CVE-2025-29088 type confusion
- CVE-2025-29087 out of bounds write
- CVE-2025-7458 integer overflow; still possible in optimized (release) rust, but debug/test builds check for it (see the sketch below)
- CVE-2025-6965 memory corruption, rust might not have helped
- CVE-2025-3277 integer overflow, rust might have helped
- CVE-2024-0232 use after free
- CVE-2023-36191 segmentation violation, unclear if rust would have helped
- CVE-2023-7104 buffer overflow
- CVE-2022-46908 validation logic error
- CVE-2022-35737 array bounds overflow
- CVE-2021-45346 memory leak
...
as you can see, the majority of CVEs of sqlite are much less likely in rust (but a rust sqlite impl. would likely use unsafe, so not impossible)
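to make the integer-overflow item above concrete, a minimal sketch (values invented) of how rust's build modes differ:

    fn main() {
        let x: u8 = 255;

        // In a debug/test build the line below would panic with
        // "attempt to add with overflow"; in a default release build
        // it wraps silently to 0. (Release builds can opt in with
        // overflow-checks = true in Cargo.toml.)
        // let y = x + 1;

        // Overflow can also be handled explicitly in any build mode:
        assert_eq!(x.checked_add(1), None);   // detected
        assert_eq!(x.wrapping_add(1), 0);     // explicit wraparound
        assert_eq!(x.saturating_add(1), 255); // clamp at max
    }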
as a side note, there being so many CVEs in 2025 seems to be related to some companies (e.g. Google) having done quite a bit of fuzz testing of SQLite
other takeaways:
- 100% branch coverage is nice, but doesn't guarantee memory soundness in C
- given how deeply people look for CVEs in SQLite, the number of CVEs found is not at all as bad as it might look
but also one final question:
SQLite has some of the best C programmers out there, only they can merge anything into the code, and it has a very limited degree of change compared to a typical company project. And we still have memory vulnerabilities. How is anyone still arguing for C for new projects?
> How is anyone still arguing for C for new projects?
It just works
That list alone sounds like it does not work.
As long as it is possible to produce an OOB in something as simple as a matrix transpose, Rust also does not work: https://rustsec.org/advisories/RUSTSEC-2023-0080.html.
I think DuckDB being written in C++ and not Rust is more interesting than SQLite being written in C.
SQLite is old, huge and known for its gigantic test coverage. There’s just so much to rewrite.
DuckDB is from 2019, so new enough to have jumped on the “rust is safe and fast” train.
If I'm remembering a DuckDB talk I attended correctly, they chose C++ because they were most confident in their ability to write clear code in it which would be autovectorized by the compilers they were familiar with. Rust in 2019 didn't have a clear high level SIMD story yet and the developers (wisely) did not want to maintain handrolled SIMD code.
If maximum performance is a top objective, it is probably because C++ produces faster binaries with less code. Modern C++ specifically also has a lot of nice compile-time safety features, especially for database-like code.
I can’t verify those claims one way or another, but I’m interested to hear why they were downvoted.
if they write it in modern C++ then it's alright tbh
The point about bounds checking in `safe' languages is well taken: it does prevent 100% branch coverage. As we all agree, SQLite has been exhaustively tested, and arguments for bounds checking in it are therefore weakened. Still, that's not an argument for replicating this practice elsewhere, not unless you are Dr Hipp and willing to work very hard at testing. C.A.R. Hoare's comment on eliminating runtime checks in release builds applies here: “What would we think of a sailing enthusiast who wears his life-jacket when training on dry land but takes it off as soon as he goes to sea?”
I am not Dr Hipp, and therefore I like run-time checks.
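For the Rust-inclined, the life-jacket distinction maps roughly onto debug_assert! vs assert! (a tiny sketch, function and values made up; both asserts shown only for contrast):

    fn store(buf: &mut [u8], i: usize, v: u8) {
        // Compiled out in release builds: the jacket comes off at
        // sea, which is exactly the practice Hoare was mocking.
        debug_assert!(i < buf.len());

        // Checked in every build: the jacket stays on.
        assert!(i < buf.len());

        buf[i] = v; // and [] itself bounds-checks at runtime anyway
    }

    fn main() {
        let mut buf = [0u8; 4];
        store(&mut buf, 2, 7);
        assert_eq!(buf[2], 7);
    }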
Ok, I didn't expect such high praise for Rust. I am not joking.
I can compile c anywhere and for any processor, which can’t be said for rust
> The C language is old and boring. It is a well-known and well-understood language.
So you might think, but there is a committee actively undermining this, not to mention compiler people keeping things exciting also.
There is a dogged adherence to backward compatibility, so you can pretend C has not gone anywhere in thirty-five years, if you like --- provided you aren't invoking too much undefined behavior. (You can't as easily pretend that your compiler has not gone anywhere in 35 years with regard to things you are doing out of spec.)
The fact that a C library can easily be wrapped by just about any language is really useful. We're considering writing a library for generating a UUID (that contains a key and value) for reasons that make sense to us, and I proposed writing this in C so we could simply wrap it as a library for all of the languages we use internally rather than having to re-implement it several times. Not sure if we'll actually build this library, but if we do it will be in C (I did manage to get the "wrap it for each language" proposal pre-approved).
You can expose a C interface from many languages (C++, Rust, C# to name a few that I've personally used). Instead of introducing a new language entirely, it's probably better to write the library in one of the languages you already use.
It is. You can also write it in C++ or Rust and expose a C API+ABI, and then you're distributing a binary library that the OS sees as very similar to a C library.
Occasionally when working in Lua I'd write something low-level in C++, wrap it in C, and then call the C wrapper from Lua. It's extra boilerplate but damn is it nice to have a REPL for your C++ code.
Edit: Because someone else will say it - Rust binary artifacts _are_ kinda big by default. You can compile libstd from scratch on nightly (it's a couple flags) or you can amortize the cost by packing more functions into the same binary, but it is gonna have more fixed overhead than C or C++.
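A minimal sketch of what that looks like from the Rust side (function name and error convention invented for illustration):

    // A Rust function exported with a C ABI. Build the crate with
    // crate-type = ["cdylib"] in Cargo.toml and the OS sees a shared
    // library much like one produced from C.

    #[no_mangle]
    pub extern "C" fn add_checked(a: i32, b: i32, out: *mut i32) -> i32 {
        // C-style error convention (invented): 0 = ok, -1 = error.
        if out.is_null() {
            return -1;
        }
        match a.checked_add(b) {
            Some(v) => {
                // SAFETY: caller promised `out` points to a valid i32.
                unsafe { *out = v };
                0
            }
            None => -1,
        }
    }

A matching C declaration is just int32_t-based, so anything with a C FFI (Lua included) can load and call it.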
> It is. You can also write it in C++ or Rust and expose a C API+ABI, and then you're distributing a binary library that the OS sees as very similar to a C library.
If I want a "C Library", I want a "C Library" and not some weird abomination that has been surgically grafted to libstdc++ or similar (but be careful of which version as they're not compatible and the name mangling changes and ...).
This isn't theoretical. It's such a pain that the C++ folks started resorting to header-only libraries just to sidestep the nightmare.
Rust libraries also impose an - in my opinion - unacceptable burden to the open source ecosystem: https://www.debian.org/releases/trixie/release-notes/issues....
This makes me less safe rather than more. Note that there is a substantial double standard here: we could never, in the name of safety, impose this level of burden from the C tooling side, because maintainers would rightfully be very upset (even toggling a warning in the default set causes discussions). For the same reason it should be unacceptable to use Rust before this is fixed, but somehow the memory safety absolutists convinced many people that this is more important than everything else. (I also think memory safety is important, but I can't help thinking that pushing for Rust does me more harm than good.)
SQLite is a true landmark. C notwithstanding, it just happened to be the right tool at the right time, and by now anything else is, well, not as interesting as what they have going on; it totally bucks the trend of throwaway software.
This is ignoring the elephant in the room: SQLite is being rewritten in Rust and it's going quite well. https://github.com/tursodatabase/turso
It has async I/O support on Linux with io_uring, vector support, BEGIN CONCURRENT for improved write throughput using multi-version concurrency control (MVCC), encryption at rest, and incremental computation using DBSP for incremental view maintenance and query subscriptions.
Time will tell, but this may well be the future of SQLite.
It should be noted that the project has no affiliation with the SQLite project. They just use the name for promotional/aspirational purposes. Which feels incredibly icky.
Also, this is a VC backed project. Everyone has to eat, but I suspect that Turso will not go out of its way to offer a Public Domain offering or 50 year support in the way that SQLite has.
> They just use the name for promotional/aspirational purposes. Which feels incredibly icky.
The aim is to be compatible with sqlite, and a drop-in replacement for it, so I think it's fair use.
> Also, this is a VC backed project. Everyone has to eat, but I suspect that Turso will not go out of its way to offer a Public Domain offering or 50 year support in the way that SQLite has.
It's MIT license open-source. And unlike sqlite, encourages outside contribution. For this reason, I think it can "win".
Calling it “SQLite-compatible” would be one thing. That’s not what they do. They describe it as “the evolution of SQLite”.
It’s absolutely inappropriate and appropriative.
They’ve been poor community members from the start when they publicized their one-sided spat with SQLite over their contribution policy.
The reality is that they are a VC-funded company focused on the “edge database” hypetrain that’s already dying out as it becomes clear that CAP theorem isn’t something you can just pretend doesn’t exist.
It’ll very likely be dead in a few years, but even if it’s not, a VC-funded project isn’t a replacement for SQLite. It would take incredibly unique advantages to shift literally the entire world away from SQLite.
It’s a new thing, not the next evolution of SQLite.
>>SQLite is being rewritten in Rust
SQLite is NOT being rewritten in Rust!
>>Turso Database is an in-process SQL database written in Rust, compatible with SQLite.
It's a ground up rewrite. It's not an official rewrite, if that's what you mean. Words are hard.
So a reimplementation, not a rewrite.
> Time will tell, but this may well be the future of SQLite.
turdso is VC funded so will probably be defunct in 2 years
Could also be an outcome. It is MIT open-source though.
Oh, so it's being written mostly by AI.
In the link you provided, this is what I read: "An in-process SQL database, compatible with SQLite."
Compatible with SQLite. So it's another database?
It's a fork and a rewrite.
Yeah, I don't think it even counts as a fork - it's a ground-up re-implementation which is already adding features that go beyond the original.
So they have much worse test coverage than sqlite
so it's sqlite++ since they added a bunch of things on top of that
The moment Turso becomes stable, SQLite will inevitably fade away with time if they don't rethink how contributions should be taken. I honestly believe the Linux philosophy of software development will be what catapults Turso forward.
These points strike me:
If the branch is never taken, and the optimizer can prove it, it will remove the check. Sometimes if it can't actually prove it there are ways to help it understand, or, in the almost extreme case, you do what I commented below.
Yeah I don't understand the argument. If you can't convince the compiler that that branch will never be taken, then I strongly suspect that it may be taken.
That's not the point. The point is that if it is never taken, you can't test it. They don't care that it inserts a conditional OP to check, they care that they can't test the conditional path.
But, there is no conditional path when the type system can assure the compiler that there is nothing to be conditional about. Do they mean that it's impossible to be 100% sure about if there's a conditional path or not?
A program can have many properties that the compiler cannot prove statically. To take a very basic case, the halting problem.
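To illustrate the "ways to help it understand" point above, a hypothetical sketch: one assert up front can give the optimizer the fact it needs to discharge the later checks (typically; this is an optimization, not a language guarantee):

    fn sum_first_4(data: &[u64]) -> u64 {
        // A single up-front assert lets the compiler prove the four
        // checked accesses below are in bounds, so their panic
        // branches can usually be removed entirely, leaving nothing
        // for coverage tooling to flag as untaken.
        assert!(data.len() >= 4);
        data[0] + data[1] + data[2] + data[3]
    }

    fn main() {
        println!("{}", sum_first_4(&[1, 2, 3, 4, 5]));
    }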
> Safe languages insert additional machine branches to do things like verify that array accesses are in-bounds. In correct code, those branches are never taken. That means that the machine code cannot be 100% branch tested, which is an important component of SQLite's quality strategy.
This is annoying in Rust. To me array accesses aren't the most annoying; it's match{} branches that will never be invoked.
There is unreachable!() for such situations, and you would hope that a branch ending in unreachable!() is recognised by the Rust tooling and just ignored. That's effectively the same as what SQLite is doing now by not doing the check. But it isn't ignored by the tooling: unreachable!() is reported as a missed line. Then there is the test coverage including the standard library by default, and you have to use regexes on path names to remove it.
A more direct translation of the sqlite strategy here is to use get_unchecked instead of [], and then you get the same behaviors.
Your example does what [] does already, it’s just a more verbose way of writing the same thing. It’s not the same behavior as sqlite.
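For reference, the three access patterns being compared in this exchange, in one hypothetical sketch:

    fn main() {
        let v = [10, 20, 30];
        let i = 1;

        // 1. Plain indexing: carries a hidden panic branch that is
        //    never taken in correct code -- the untestable branch the
        //    SQLite doc objects to.
        let a = v[i];

        // 2. get + unreachable!(): the same semantics as [] written
        //    out by hand -- the "more verbose way of writing the same
        //    thing" from the reply above.
        let b = match v.get(i) {
            Some(x) => *x,
            None => unreachable!(),
        };

        // 3. get_unchecked: no branch at all, the closest analogue of
        //    what the C code does; the caller supplies the proof.
        // SAFETY: i < v.len() by construction here.
        let c = unsafe { *v.get_unchecked(i) };

        assert_eq!((a, b, c), (20, 20, 20));
    }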
Turso:
https://algora.io/challenges/turso "Turso is rewriting SQLite in Rust ; Find a bug to win $1,000"
------
- Dec 10, 2024 : "Introducing Limbo: A complete rewrite of SQLite in Rust"
https://turso.tech/blog/introducing-limbo-a-complete-rewrite...
- Jan 21, 2025 - "We will rewrite SQLite. And we are going all-in"
https://turso.tech/blog/we-will-rewrite-sqlite-and-we-are-go...
- Project: https://github.com/tursodatabase/turso
Status: "Turso Database is currently under heavy development and is not ready for production use."
sqlite3 has one C source file (apparently this is called "the amalgamation") that is ~265 kloc (!) long, with external dependencies on zlib, readline and ncurses. built binaries are libsqlite3.so at 4.8M and sqlite3 at 6.1M.
turso has 341 rust source files spread across tens of directories and 514 (!) external dependencies that produce (in release mode) 16 libraries and 7 binaries with tursodb at 48M and libturso_sqlite3.so at 36M.
looks roughly an order of magnitude larger to me. it would be interesting to understand the memory usage characteristics in real-world workloads. these numbers also sort of capture the character of the languages. for extreme portability and memory efficiency, probably hard to beat c and autotools though.
But if you don't have the bounds checks in machine code, then you don't have bounds checks.
I suppose SQLite might use a C linter tool that can prove the bounds checks happen at a higher layer, and then elide redundant ones in lower layers, but... C compilers won't do that by default, they'll just write memory-unsafe machine code. Right?
It's hard to argue with success. SQLite's pervasiveness is kind of a royal flush.
SQLite works great in Fil-C with minimal changes.
So, the argument for keeping SQLite written in C is that it gives the user the choice to either:
- Build SQLite with Yolo-C, in which case you get excellent performance and lots of tooling. And it's boring in the way that SQLite devs like. But it's not "safe" in the sense of memory safe languages.
- Build SQLite with Fil-C, in which case you get worse (but still quite good) performance and memory safety that exceeds what you'd get with a Rust/Go/Java/whatever rewrite.
Recompiling with Fil-C is safer than a rewrite into other memory safe languages because Fil-C is safe through all dependencies, including the syscall layer. Like, making a syscall in Rust means writing some unsafe code where you could screw up buffer sizes or whatnot, while making a syscall in Fil-C means going through the Fil-C runtime.
One thing I found especially interesting is the section at the end about why Rust isn't used. It leaves the door open and is at least constructive feedback to the Rust community.
For a project that is functionally “done”, switching doesn't make sense. For something like kernel code, where you know it'll continue to evolve, going through the pain may be worth it.
This is what I expected. Rust is the first thing that has been worth considering as a C replacement. C++ wasn't.
I wonder if the hype helps rust being a better language
At this point I wish the creators of the language could talk about what rust is bad at.
Folks involved often do! Talking about what’s not great is the only path towards getting better, because you have to identify pain points in order to fix them.
I would go as far as saying that 90% of managing the project is properly communicating, discussing and addressing the ways in which Rust sucks. The all-hands in NL earlier this year was wall to wall meetings about how much things suck and what to do about them! I mean this in the best possible way. ^_^
> Recoding SQLite in Go is unlikely since Go hates assert()
Any idea what this refers to? assert is a macro in C. Is the implication that OP wants the capability of testing conditions and then turning off the tests in a production release? If so, then I think the argument is more that go hates the idea of a preprocessor. Or have I misunderstood the point being made?
https://go.dev/doc/faq#assertions
Steve, thanks for taking the time to point me to this on-point passage.
Aren't SQLite’s bottlenecks primarily I/O-bound (not CPU)? If so, fopen, fread, or syscalls are the most important to performance, and pure language efficiency wouldn't be the limiter.
What's up with SQLite news lately? I feel like I see at least 1-2 posts about it per day.
> Rust needs to mature a little more, stop changing so fast, and move further toward being old and boring.
Talk of C99, or C++11, and then “oh, you need the nightly build of Rust” were juxtaposed in such a way that I never felt comfortable banging out “yum install rust” and giving it a go.
Other than some operating systems projects, I haven’t run into a “requires nightly” in the wild for years. Most users use the stable releases.
(There are some decent reasons to use the nightly toolchain in development even if you don’t rely on any unfinished features in your codebase, but that means they build on stable anyway just fine if you prefer.)
Good to know, maybe I’ll give it a whirl. I’d been under the (mistaken, apparently) impression that if one didn’t update monthly they were going to have a bad time.
You may be running into forwards compatibility issues, not backwards compatibility issues, which is what nightly is about.
The Rust Project releases a new stable compiler every six weeks. Because it is backwards compatible, most people update fairly quickly, as it is virtually always painless. So this may mean, if you don’t update your compiler, you may try out a new package version and it may use features or standard library calls that don’t exist in the version you’re using, because the authors updated regularly. There’s been some developments in Cargo to try and mitigate some of this, but since it’s not what the majority of users do, it’s taken a while and those features landed relatively recently, so they’re not widely adopted yet.
Nightly features are ones that aren’t properly accepted into the language yet, and so are allowed to break in backwards incompatible ways at any time.
But the original point "C99 vs something later" is also about forward compatibility issues.
I love him so much.
because Rust wasn't out yet back then????
I don't want to sound cynical, but a lot of it has to do with the simplicity of the language. It's much harder to find a good Rust engineer than a C one. When all you have is pointers and structs it's much easier to meet the requirements for the role.
I’d be curious to know what the creators of SQLite would have to say about Zig.
Zig gives the programmer more control than Rust. I think this is one of the reasons why TigerBeetle is written in Zig.
> Zig gives the programmer more control than Rust
More control over what exactly? Allocations? There is nothing Zig can do that Rust can’t.
> More control over what exactly? Allocations? There is nothing Zig can do that Rust can’t.
I mean yeah, allocations. Allocations are always explicit. Which is not true in C++ or Rust.
Personally I don't think it's that big of a deal, but it's a thing and maybe some people care enough.
> Which is not true in [] Rust.
...If you're using the alloc/std crates (which to be fair, is probably the vast majority of Rust devs). libcore and the Rust language itself do not allocate at all, so if you use appropriate crates and/or build on top of libcore yourself you too can have an explicit-allocation Rust (though perhaps not as ergonomic as Zig makes it).
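A small sketch of what that core-only style looks like (types and names invented; a real no_std binary additionally needs a panic handler and an entry point, omitted here):

    // With no_std, only libcore is linked, and libcore contains no
    // heap allocation at all.
    #![no_std]

    /// A fixed-capacity stack living entirely in caller-provided
    /// memory, roughly the shape Zig encourages with explicit
    /// buffers and allocators.
    pub struct Stack<const N: usize> {
        buf: [u32; N],
        len: usize,
    }

    impl<const N: usize> Stack<N> {
        pub const fn new() -> Self {
            Stack { buf: [0; N], len: 0 }
        }

        /// Returns false instead of allocating when full.
        pub fn push(&mut self, v: u32) -> bool {
            if self.len == N {
                return false;
            }
            self.buf[self.len] = v;
            self.len += 1;
            true
        }
    }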
I think zig generally composes better than rust. With rust you pretty much have to start over if you want reusable/composable code, that is, not use the default std. Rust has small crates for every little thing because it doesn't compose well, and also to improve compile times. libc in the default std is also a major L.
> I think zig generally composes better than rust.
I read your response 3 times and I truly don't know what you mean. Mind explaining with a simple example?
It mainly comes down to how the std is designed. Zig has many good building blocks like allocators, and every function that allocates something takes one. This allows you to reuse the same code in different kinds of situations.
Hash maps in the zig std are another great example, where you can use an adapter to completely change how the data is stored and accessed while keeping the same API [1]. For example, to have a map with a limited memory bound that automatically truncates itself; in rust you need to either write a completely new data structure for this or rely on someone's crate again (indexmap).
Errors in zig also compose better; in rust I find error handling really annoying. Anyhow makes it better for application development, but you shouldn't use it when writing libraries.
When writing zig I always feel like I can reuse pieces of existing code by combining the building blocks at hand (including freestanding targets!). While in rust I always feel like you need to go for the fully tailored solution with its own gotchas, which is ironic considering how many crates there are and how many crates projects depend on vs. typical zig projects that often don't depend on lots of stuff.
1: https://zig.news/andrewrk/how-to-use-hash-map-contexts-to-sa...
> Nearly all systems have the ability to call libraries written in C. This is not true of other implementation languages.
From section "1.2 Compatibility". How easy is it to embed a library written in Zig in, say, a small embedded system where you may not be using Zig for the rest of the work?
Also, since you're the submitter, why did you change the title? It's just "Why is SQLite Coded in C", you added the "and not Rust" part.
The article allocates the last section to explaining why Rust is not a good fit (yet) so I wanted the title to cover that part of the conversation since I believe it is meaningful. It illustrates the tradeoffs in software engineering.
> Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize.
From the site guidelines: https://news.ycombinator.com/newsguidelines.html
Also, Rust needs a better stdlib. A crate for every little thing is kinda nuts.
One reason I enjoy Go is the pragmatic stdlib. In most cases, I can get away without pulling in any 3p deps.
Now of course Go doesn’t work where you can’t tolerate GC pauses and need some sort of FFI. But because of the stdlib and faster compilation, Go somehow feels lighter than Rust.
Rust doesn’t really need a better stdlib as much as a broader one, since it is intentionally narrow. Go’s stdlib includes opinions like net/http and templates that Rust leaves to crates. The trade-off is Rust favors stability and portability at the core, while Go favors out-of-the-box ergonomics. Both approaches work, just for different teams.
Is Rust's stdlib worse than C's? It's not an argument here.
me when I don't know ball: