This reminds me...
STOP putting your shitty dot-directories full of a mix of config, cache and data in my god damned home directory.
Concerned that not doing this will break things? Just check if you've dumped your crap in there already and if not then put it in the right places.
Worried about confusing existing users migrating to other machines? Document the change.
Still worried? How about this: I'll put a file called opt-into-xdg-base-directory-specification inside ${XDG_CONFIG_HOME:-$HOME/.config} and if you find that file you can rest assured I won't be confused.
Thanks in advance!
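(Not part of the thread, just an illustration: a minimal Rust sketch of the opt-in check proposed above. The marker file name and the XDG_CONFIG_HOME-with-$HOME/.config fallback come from the comment; the helper name and the printed messages are made up.)

```rust
use std::env;
use std::path::PathBuf;

// Hypothetical helper: has the user opted into the XDG base directory spec?
fn user_opted_into_xdg() -> bool {
    // $XDG_CONFIG_HOME, falling back to $HOME/.config as in the comment above.
    let config_home = env::var_os("XDG_CONFIG_HOME")
        .map(PathBuf::from)
        .or_else(|| env::var_os("HOME").map(|home| PathBuf::from(home).join(".config")));

    match config_home {
        Some(dir) => dir.join("opt-into-xdg-base-directory-specification").is_file(),
        None => false,
    }
}

fn main() {
    if user_opted_into_xdg() {
        println!("use $XDG_CONFIG_HOME / $XDG_CACHE_HOME / $XDG_DATA_HOME");
    } else {
        println!("fall back to the legacy dot-directory in $HOME");
    }
}
```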
>Debug information tends to be large and linking it slows down linking quite considerably. If you’re like many developers and you generally use println for debugging and rarely or never use an actual debugger, then this is wasted time.
Interesting. Is this true? In my work (java/kotlin, primarily in app code on a server, occasional postgres or frontend js/react stuff), I'm almost always reaching for a debugger as an enormously more powerful tool than println debugging. My tests are essentially the println, and if they fail for any interesting reason I'll want the debugger.
In my experience Java debuggers are exceptionally powerful, much more so than what I've seen from C/C++/Rust debuggers.
If I'm debugging some complicated TomEE application that might take 2 minutes to start up, then I'm absolutely reaching for the IntelliJ debugger as one of my first tools.
If I'm debugging some small command line application in Rust that will take 100ms to exhibit the failure mode, there's a very good chance that adding a println debug statement is what I'll try first.
Last time I debugged Rust with CLion/RustRover, the debugger was the same as VSCode uses.
Sure. It’s got breakpoints, and conditional breakpoints, using the same engine as IntelliJ. It’s got evaluate, it’s got expression and count conditionals, it’s got rewind. It has the standard locals view.
Rust support has improved pretty strongly in 2024 (before this year it just shelled out to lldb); the expression parser and, more importantly, the variable viewer are greatly improved since January.
It can do any single expression, and the results are better than lldb's, but it can’t do multiple statements, and not everything in Rust can be written as one expression; you can’t use {} here.
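(An illustration, not from the comment: the same made-up computation written once as a single expression, which an expression-only evaluator like the one described could accept, and once as a {} block of statements, which is exactly what it rules out.)

```rust
// Made-up data; the point is the shape of the code, not the domain.
fn main() {
    let orders = vec![120u32, 80, 305, 99];

    // One expression: the kind of thing an expression-only evaluator can accept.
    let over_100 = orders.iter().filter(|&&o| o > 100).count();

    // The same logic as a {} block of statements: perfectly valid Rust,
    // but exactly what "you can't use {} here" rules out in the evaluator.
    let over_100_again = {
        let mut n = 0;
        for &o in &orders {
            if o > 100 {
                n += 1;
            }
        }
        n
    };

    assert_eq!(over_100, over_100_again);
    println!("{over_100} orders over 100");
}
```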
Hmm, I've only seen that kind of random stepping in some 'new-ish' languages (like currently Zig), which I think is a compiler bug when generating debug info. In C/C++ I see it only when trying to debug an optimized program.
Depends on the developer. In The Practice of Programming (https://en.m.wikipedia.org/wiki/The_Practice_of_Programming), Brian W. Kernighan and Rob Pike say they use debuggers only to get a stack trace from a core dump and use printf for everything else. You can disagree, but those are known to be very good programmers.
...have you ever used the Visual Studio integrated debugger for C/C++ instead of 'raw' gdb/lldb without a UI frontend?
But what source code debuggers did they have available?
Other than gdb, I can't name any Unix C source code debuggers.
I believe they were working on Unix before GDB was created (Wikipedia says GDB was created in 1986 - https://en.wikipedia.org/wiki/Gdb).
Plan 9 has acid, but from the man page and English manual, the debugger is closer to a CLI/TUI than a GUI.
see https://9fans.github.io/plan9port/man/man1/acid.html and https://plan9.io/sys/doc/acid.html
https://en.wikipedia.org/wiki/Advanced_Debugger
Printf debugging excels in environments where there's already good logging. There I just need to pinpoint where in my logs things have already gone wrong and work my way backwards a bit.
You could do the same with a debugger by setting up a breakpoint, but the logs can better surface the key application-level decisions made in the process of getting to the current bad state.
With a debugger I need to wind back through all the functions, which can get awful: many of them are likely-correct library calls that you have to skip over when going back in time, and those make up a huge portion of the functions called before the breakpoint.
I don't think it's impossible to do with a debugger, but logging sort of bypasses the process of telling the debugger what's relevant so it can hide the rest. The log statements might already be in your codebase, whereas there are no equivalent annotations already there in the code to help the debugger understand what's important.
To me printf helps surface the relevant application-level process of getting to a broken state, while debuggers help understand hairy situations where things have gone wrong at a lower level, say missing fields or memory corruption, but these days with safer languages lower-level issues should be way less frequent.
---
On a side-note, it doesn't help debuggers that going back in time was really hard with variable-length instructions. I might be wrong here, but it took a while until `rr` came out.
I do think that complexities like that resulted in spending too much time dealing with hairy details instead of improving the UI for debugging.
I really value debuggers. I have spent probably half my career solving problems that weren’t possible to solve with a debugger. When I can fall back on it, it helps me personally quite a bit.
I find them amazing; it's just that printf is unreasonably good given how cheap it is.
If I had the symbols, metadata, a powerful debugger engine and a polished UI, I'd take that over printf every day, but in the average situation printf is just too strong when fighting in the mud.
Why not both? For any sufficiently complex app you will have some form of logging either way, and then you can further pinpoint the issue with a debugger.
Also, debuggers can do live evaluation of expressions, or do stuff like conditional breakpoints, or they can simply add additional logging themselves. They are a very powerful utility.
Your existing logs will tell you roughly where and you just insert some more log lines to check the state of the data.
Whether a debugger will be faster/easier depends on how fast your build/run cycle is and how many different processes/threads are involved, but a lot of it just comes down to preference. Most time spent debugging, for me at least, is spent thinking about the probable cause and then choosing what state to look at.
Faster than pressing F9 (to set a breakpoint on the current line) and then F5 (to start the program under the debugger)?
Printf-debugging has its uses, but they are very niche (for instance when you don't have access to a properly integrated debugger). Logging on the other hand is useful, but logs are only one small piece of the puzzle in the overall debugging workflow - usually only for debugging problems that slipped into production and when your code runs on a server (as opposed to a user machine).
I've had to do that on an embedded system that didn't support debugging. It was hell.
I'm not against printf at all, my lifetime commit history is evidence of that. Do you also think that, in the case where a core dump doesn't exist, printf is faster? Sincere question. I'm having an internal argument with myself about it at the moment and some outside perspective would be most welcome.
Though you should note that I'm repeating their claims. What I think is hidden.
The only time I use a debugger with Rust is when unsafe code in some library crate messes up. My own code has no "unsafe". I have debug symbols on and a panic catcher that displays a backtrace in a popup window. That covers most cases.
Rust development is mostly fixing compile errors, anyway. Once it compiles, it often works the first time. What matters is compile time for error compiles, which is pretty good.
Incremental compile time for my metaverse client is 1 minute 8 seconds in release mode. That's OK. Takes longer to test a new version.
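(A rough Rust sketch, not the commenter's actual code, of a "panic catcher that displays a backtrace" along the lines described above. It assumes the standard library's panic hook and std::backtrace; the popup part is application-specific, so this version just prints to stderr.)

```rust
use std::backtrace::Backtrace;
use std::panic;

fn install_panic_catcher() {
    panic::set_hook(Box::new(|info| {
        // force_capture collects a backtrace even if RUST_BACKTRACE isn't set
        // (debug symbols are needed for useful frame names).
        let bt = Backtrace::force_capture();
        // A GUI app would hand this string to a popup instead of stderr.
        eprintln!("panic: {info}\n{bt}");
    }));
}

fn main() {
    install_panic_catcher();
    // Anything that panics from here on goes through the hook above.
    let v: Vec<i32> = Vec::new();
    let _ = v[3]; // out of bounds -> panic -> backtrace
}
```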
Debuggers are not only useful for actual debugging as in 'finding and fixing bugs'; they are basically interactive program state explorers. Also, "once it compiles, it works" is true for every programming language unless you're a complete newbie. The interesting bugs usually only manifest after your code is hammered by actual users and/or real-world data.
I also don't get it, debuggers as integral part of the programming workflow are a productivity multiplier. It does seem to be a fairly popular opinion in some programmer circles that step-debugging is useless, but I guess they never really used a properly integrated debugger to begin with (not a surprise tbh if all they know is gdb in the terminal).
That is why I found it so great that Carmack's opinion on debuggers is similar to ours; at least there is some hope of educating the crowds that worship Carmack's achievements.
I use the debugger fairly regularly, though for me I'm on a stack where friction is minimal. In Go w/ VS Code, you can just write a test, set your breakpoints, hit "debug test", and you're in there in probably less than 20 seconds.
I am like you though, I don't typically resort to it immediately if I think I can figure out the problem with a quick log. And the times where I've not had access to a debugger with good UX, this tipping point can get pushed quite far out.
Debugging in Rust is substantially less common for me (and probably not only for me) because it is less often needed and more difficult - many things that are accessible in the interpreted world don't exist in a native binary.
I do care about usable tracebacks in error reports though.
The main challenge with debuggers in Rust is mapping the data correctly onto the complex type system. For this reason I rarely use debuggers, because dbg! is superior in that sense.
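(For anyone who hasn't used it: dbg! prints the file, line, and Debug representation of its argument and then passes the value through, so it can be dropped into the middle of an expression. A small made-up example:)

```rust
#[derive(Debug)]
struct Order {
    id: u32,
    total: u32,
}

fn main() {
    let orders = vec![
        Order { id: 1, total: 120 },
        Order { id: 2, total: 80 },
    ];

    // dbg! returns its argument, so it can wrap a value in the middle of an
    // expression. Prints something like:
    //   [src/main.rs:14:13] orders.len() = 2
    let n = dbg!(orders.len());

    // Structured values come out via their Debug impl, types and fields included.
    dbg!(&orders[0]);

    println!("{n} orders");
}
```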
println debugging is where everyone starts. Some people never graduate to knowing how to use a debugger.
Debugging through log data still has a place, of course. However, trying to do all of your debugging through println is so much harder, even though it feels easier than learning to use a debugger.
I reach for a “real” debugger when necessary, but that’s less than 5% of the time.
I wonder, do you use a separate debugger, or a debugger that's integrated into your IDE? "Reaching for a debugger" is just pressing F5 in an IDE.
E.g. I keep wondering whether the split between people who can't live without debuggers vs people who rarely use debuggers is actually people who use IDEs versus people who don't.
To be fair, if your code is multithreaded and sensitive to pauses, it becomes harder to debug with a debugger.
Ultimately, if you have a good logging setup and kinda know where the issue is, a quick log message could be faster than debugging if all you want to do is look at a variable value.
Logging can change timing issues though. There are too many cases where an added log statement "fixed" a race condition, simply by altering the timing/adding some form of synchronization inherent in the logging library.
printf/println debugging works if you wrote the code or have a good idea of where to go.
I frequently find myself debugging large unfamiliar code bases, and typically it’s much easier to stick a breakpoint in and start following where it goes rather than blindly start instrumenting with print statements and hoping that you picked the right code path.
I trained in the cout school of debugging. I can use a debugger, and sometimes do, but it's really hard to use a debugger effectively when you're also dealing with concurrency and network clients. Maybe one day, I'll learn how to use one of the time traveling debuggers and then I can record the problem and then step through it to debug it.
I came here to write exactly this... if I had been drinking something I would have spit it everywhere laughing when I read it.
I guess 'many developers' here probably refers to web developers who don't use the debugger, because it's mostly useless/perpetually broken in JS land? I rely heavily on the debugger; can't imagine how people work without one.
> If you’re like many developers and you generally use println for debugging and rarely or never use an actual debugger
You lost me here.
Using a debugger to step through the code is a huge timesaver.
Inserting println statements, compiling, running, inserting more println, and repeating is very inefficient.
If you learn to use a debugger, set breakpoints, and step through code examining values while you go then the 8 seconds spent compiling isn’t an issue.
Engh... I used to rely on a debugger a lot 25 years ago when I was learning to program, and was extremely good at making it work and using its quirks; but, I was building overly-simplistic software, and it feels so rare of a thing to be useful anymore. The serious code I now find myself actually ever needing to debug, with multiple threads written in multiple languages all linked together--code which is often performance sensitive and even often involves networked services / inter-process communication--just isn't compatible with interactive debugging.
So like, sure: if I have some trivial one-off analysis tool I am building and I run into some issue, I could figure out how to debug it. But even then I am going to have to figure out yet another debugging environment for yet another language, and also of course surmount the hassle of ensuring that sufficient debugging information is available and that I'm running a build that is unoptimized enough to be debuggable and yet not so slow that I'm gouging my eyes out. I could use a debugger, but I'd rather sit and stare at the code longer than start arguing with a debugger.
I wouldn't even start a new project before figuring out a proper debugging strategy for it. This also includes not using languages or language features that are not debuggable (stepping from one compiled language into another usually isn't a problem btw, debug information formats like DWARF or PDB are language-agnostic).
Your serious code isn't performance sensitive if it's a distributed monolith like you're describing.
It's just another wannabe big-scale dumpster fire some big-brain architect thought up, thinking he's implementing a platform that's actively being used by millions of users simultaneously, a la Google or a few social media sites.
Trace debugging can be inefficient, but it's also highly effective, portable between all languages, and requires no additional tooling. Hard to beat that combo.
> Inserting println statements, compiling, running, inserting more println, and repeating is very inefficient.
That all depends on how long it takes to compile and run.
> If you learn to use a debugger, set breakpoints, and step through code examining values while you go then the 8 seconds spent compiling isn’t an issue.
Meh, it's fine.
You still have to set new ones and re-run your code to trigger the reproduction iteratively if you don't know where anything is wrong.
Clicking "add logging break point here" and adding a print statement is really not very different. In my experience the hard part is know where you want to look. Stepping through all your code line by line and looking at all values every step is not a quick way to do anything. You have to divide and conquer your debugging smartly, like a binary search all over your call sites in the huge tree-walk that is your program.
Stepping through code and adding breakpoints is a spectacular way to figure out where to put log statements. Modern code is so abstract that it’s nigh impossible to figure out what the fuck function even gets called just from looking at code.
Debuggers are pretty good when you want to understand someone else's code.
Is it really saving time, or are you not thinking enough about what is wrong until you stumble on an answer? I can't answer for you, but I find that the forced wait for a build also forces me to think, and so I find the problem faster. It feels slower, but the clock is the real measure.
It's one click to set the breakpoint in the ide or one line if you're using gdb from the command line. I'm not sure how printf debugging could be quicker even if you didn't have to rebuild. Having done both, I'd take the debugger any day.
The important time is thinking time, and debuggers don't help. Often they hurt, because it is so seductive to set those breakpoints instead of stopping to think about why.
Sometimes "just thinking harder" works, but often not. A debugger helps you understand what your code is actually doing, while your brain is flawed and makes flawed assumptions. Regardless of who you are, it's unlikely you will be manually evaluating code in your head as accurately as gdb (or whatever debugger you use).
I think a lot of linux/mac folks tend to printf debug, while windows folks tend to use a debugger, and I suspect it is a culture based choice that is justified post hoc.
However, few things have been better for my code than stepping through anything complex at least once before I move on (I used to almost exclusively use printf debugging).
This is too slow and manual a process to be a huge timesaver; you'd need a "time-travel" debugging capability that eliminates these steps to save time.
There are a lot of different views on debugging with a debugger vs print statements and on what works better. This often seems to be based on user preference and familiarity. One thing that I haven’t seen mentioned is issues with dependencies. Setting up a path dep in Rust, or fetching the code in whatever language you’re using for your project, usually takes more time than simply adding some breakpoints in the library code.
It's a lot faster to add a log point in a debugger than to add a print statement and recompile. Especially with cargo check, I really don't see the point of non-debuggable builds (outside of embedded, but the size of debuginfo already makes that a non-starter).
2. Conditionally compiling some code like logging (not sure if it matters for typical Rust projects, but for embedded C projects it's typical).
3. Conditionally compiling assertions to catch more bugs.
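(A minimal Rust sketch of points 2 and 3 above, assuming the usual mapping onto cfg(debug_assertions) and debug_assert!; the function and the messages are made up.)

```rust
fn checked_divide(a: i32, b: i32) -> i32 {
    // 3. Assertions that only exist in debug builds; release builds skip the check.
    debug_assert!(b != 0, "divisor must be non-zero");

    // 2. Code (e.g. verbose logging) compiled only when debug assertions are enabled.
    #[cfg(debug_assertions)]
    eprintln!("checked_divide({a}, {b})");

    a / b
}

fn main() {
    println!("{}", checked_divide(10, 2));
}
```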
I'm using logs, because the debugger breaks the hardware. Very rarely do I need to reach for a debugger. Even when a hard exception occurs, usually enough info is logged to find out the root cause of the bug.
What? Seems like you're talking about embedded, but I've done a lot of embedded projects in my time and I've never had a debugger that breaks the HW.
Depends what you're working on. Stopped in an unfortunate place? That one element didn't get turned off and burned out. Or the motor didn't stop. Or the crucial interrupts got missed and your state is now reset. Or...
Are there debugging tools specifically for situations like that?
Do you just write code to test manually?
How do you ensure dev builds don't break stuff like that even without considering debugging?
You ensure dev builds don't break stuff like that with realtime programming techniques. Dev tools exist and they're usually some combination of platform specific, expensive, buggy, and fragile.
printf and friends are fantastic when applicable. Sometimes, though, even an async print is too costly, or building in any mode except stripped release is impossible, which usually leads to !fun!.
The most useful tool is a full tracing system (basically a stream of run instructions you can use to trace the execution of the code without interrupting it), but unfortunately they're quite expensive and proprietary, and require extra connections to the systems that do support them, so they're not particularly commonly used. Most people just use some kind of home-grown logging/tracing system that tracks the particular state they're interested in, possibly logged into a ring buffer which can be dumped when triggered by some event.
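(A toy Rust sketch of that home-grown approach, not taken from the thread: a fixed-size ring buffer of recent events that is cheap to record into and only dumped when something interesting happens. The struct name, sizes, and event strings are all made up.)

```rust
// Minimal in-memory trace ring: always recording, dumped only on demand.
struct TraceRing {
    events: Vec<Option<(u64, &'static str)>>, // (sequence number, message)
    next: usize,
    seq: u64,
}

impl TraceRing {
    fn new(capacity: usize) -> Self {
        TraceRing { events: vec![None; capacity], next: 0, seq: 0 }
    }

    // Cheap enough to sprinkle through hot paths; overwrites the oldest entry.
    fn record(&mut self, msg: &'static str) {
        self.events[self.next] = Some((self.seq, msg));
        self.seq += 1;
        self.next = (self.next + 1) % self.events.len();
    }

    // Dump oldest-to-newest, e.g. from a fault handler or watchdog callback.
    fn dump(&self) {
        for i in 0..self.events.len() {
            if let Some((seq, msg)) = self.events[(self.next + i) % self.events.len()] {
                eprintln!("[{seq}] {msg}");
            }
        }
    }
}

fn main() {
    let mut trace = TraceRing::new(4);
    for step in ["init", "read sensor", "tx start", "tx timeout", "retry"] {
        trace.record(step);
    }
    // Triggered by some event, e.g. the timeout above; only the 4 newest survive.
    trace.dump();
}
```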
I mean that stopping execution will often break the software logic; for example a BLE connection will time out, an SPI chip will stop communicating because of the lack of commands, the watchdog will reboot the chip, etc. And then the whole program will not work as expected, so debugging it will not make further sense. Sorry for the miscommunication, I did not mean that the hardware physically breaks. It might be possible to solve some of those issues, but generally printing is enough; at least that was my experience.
On embedded, debuggers almost never work until you get to really expensive ones.
In addition, debuggers tend to obscure the failure because they turn on all the hardware which tends to make bugs go away if they are related to power modes.
One of my "best" debuggers for embedded was putting an interactive interpreter over a serial interface on an interrupt so I could query the state of things when a device woke up even if it was hung--effectively a run time injectable "printf".
Crude, but it could trace down obscure bugs that were rare because the device would stay in the failure mode.
The biggest problem was maintaining the database of code so that we knew exactly what build was on a device. We had to hash and track the universe.
I’ve worked with many M3s and M4s and some Cypress microchips and the JTAG debuggers always worked fine as far as I recall. There were some vendors that liked to force you to buy really expensive ancillary HW but a) there was plenty of OSS that worked fairly well b) you could pick which vendor you went with.
All of those chips you mentioned will turn on all the units at full power when you connect a debugger to them.
And the OSS stuff never works correctly. I wind up debugging the OSS stuff more than my own hardware. And I've used a LOT of OSS (to the point that I wrote software to use a Beaglebone as my SWD debugger to work around all the idiocies--both commercial and OSS).
Surely it would be better to use a debugger and avoid recompiling your code for the added print statements than to strip out the debug information to decrease build times.
I build wasm as a target and this is sadly not an option; I have to rebuild for each little bit. I will try a different linker and see how it goes though!
Just my lack of experience here, but I'm trying to verify I'm successfully using sold. The mold linker claims it leaves metadata in the .comment section, but does that exist on Mach-O? Is there an objdump command equivalent to the readelf command mold suggests for checking this?
GitHub repo: https://github.com/davidlattimore/wild
List of his blog posts, a couple of which are about wild: https://davidlattimore.github.io/