Debug vs. Release?

When I create a project with CodeWarrior for MCU using Processor Expert and S08GB60, I’m asked if I want to have a Debug and/or Release configuration:

Debug or Release in Processor Expert

Debug or Release? For an embedded microcontroller? Does this make any sense?

Debug and Release in the Desktop World

In the Desktop World, ‘Debug’ and ‘Release’ builds have the following typical meanings:

  • Debug builds have debugging and symbolic information included. The compiler does not optimize, to make debugging ‘easier’.
  • Release builds have the debugging and symbolic information (Dwarf in ELF/Dwarf files) stripped off. Optimizations are enabled for best performance or code density (see the sketch below).
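
With the GNU toolchain, this typically maps to compiler flags like the following (a minimal sketch; the file names are placeholders):

    gcc -O0 -g3 -o app.elf main.c   # Debug: no optimization, full debug information
    gcc -O2 -g -o app.elf main.c    # Release: optimized; then strip the debug information:
    strip --strip-debug app.elf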

So the idea is that you have something easy to develop and debug. And once everything works, you create the optimized Release version and ship it. That makes a lot of sense, especially as the binary file size is greatly reduced with the debug information stripped off. Loading that file is faster too, as it is smaller. So there are a lot of good reasons to create a Release version.

The problems start if it works in the Debug version, and everything crumbles to dust in the Release one (see Surviving the Release Version).

In short: Debug and Release versions make sense because of the binary file size. If you are not debugging, you do not need the debug information loaded with the binary.

Debug and Release in the Embedded World

Embedded debugging is different: if you debug your application, the debug information remains on the host, while only the code gets downloaded to the target. The debug information is not loaded into the target memory. This means that this advantage of a Release version does not exist here.

Oh wait: there might still be good reasons.

  • Bootloader: If I’m using a loader on the target which can load other applications, then stripping off the debug information reduces the file size. That way I can load files faster (see the sketch after this list).
  • Embedded Linux: If my application loads libraries or executables, then having the debug information removed saves loading time and space in RAM.
  • Making reverse engineering harder: removing the debug information pretty much means decoding and debugging things at the assembly level. That makes reverse engineering harder. But it only makes it harder; it does not prevent it. So I see little value in this use case.
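
In the bootloader case, the debug information typically never ends up in the loaded image anyway, because the file handed to the loader is usually an S-Record or raw binary generated from the ELF file. A sketch with the GNU binutils (file names are placeholders):

    objcopy -O srec app.elf app.s19    # S-Record file for the loader
    objcopy -O binary app.elf app.bin  # raw binary image
    size app.elf                       # text/data/bss: what actually gets loaded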

Debug Information (Symbolics)

How to remove the debug information? Typically, the linker and/or the compiler offer an option to either remove the information or not generate it in the first place:

Generate Debugging Information (or not)
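
With the GNU tools, for example, I can either not generate the information in the compiler, or strip it off afterwards (a sketch; file names are placeholders):

    gcc -g0 -c main.c                              # do not generate debug information
    objcopy --strip-debug app.elf app_release.elf  # or write a stripped copy of an existing file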

And what about the other aspect: the compiler optimizations? Does ‘Debug’ not mean that the compiler optimizations are disabled so I still can debug? And that it is not possible to debug optimized code?

My thinking is: I use and debug with the compiler options I intend to release the software with. I don’t want to be in the situation where I have two different versions of my application. This not only increases the effort, but also the risks. What if one works, but not the other? So I set the compiler optimizations to the level I want, and use that both for development and for the release.

Optimization

But then someone might say “oh, but you cannot debug optimized code”. I might need to inspect the registers or the assembly code to see what is really going on. But (normally) this is not the fault of the debugger, but of the compiler. The ELF/Dwarf object and debug format is sophisticated enough to describe highly optimized code, so compilers could (!) generate debugging information for it, if only they would do it :-(. From my previous life as a compiler engineer I know it is hard work to keep the right debugging information through all the optimization stages, but it is possible. It is good that compilers optimize for code size and speed, but a compiler which does not produce correct debugging information for highly optimized code is simply not state of the art any more. There are a lot of ‘ease-of-use’ efforts going on in the industry, and I think this needs to start in the compilers as well: producing correct debugging information would make highly optimized code easier to use.
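
To illustrate (a hypothetical example): with -O2, gcc may replace the loop below with its closed-form result, so stepping through the loop body or watching the variables only works if the compiler emits the Dwarf location information describing where (or whether) those variables live:

    /* hypothetical example: at -O2, the compiler may fold this whole loop
       into the closed-form result n*(n+1)/2; 'i' and 'sum' may then live
       only in registers, or not exist at all */
    int sum_to(int n) {
      int sum = 0;
      for (int i = 1; i <= n; i++) {
        sum += i;
      }
      return sum;
    }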

Summary

  • For Embedded (with the exception of things like Embedded Linux) ‘Debug’ or ‘Release’ versions do not make sense.
  • I debug with the same compiler options as I will deliver my project with at the end. That way I do not need to keep two different sets of options or binaries, and I’m debugging what I’m getting at the end.
  • It should be possible to debug highly optimized code. In my experience it is usually the compiler (and rarely the debugger) which fails to do it correctly. I wish compilers would get better at producing the correct debugging information.

Happy Releasing 🙂

PS: if you are wondering what the difference is between the ‘Release’ and ‘Debug’ Processor Expert configurations shown for the S08GB60A at the beginning of this post: in the ‘Release’ configuration, the debug port is disabled using a microcontroller configuration register. That register is written during startup of the processor. You will not be able to debug the device after that point ;-).
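
Conceptually, the generated startup code does something like this (a hypothetical sketch; the register name, address, and bit value are placeholders, not the actual S08GB60A definitions):

    /* hypothetical sketch: disable the background debug (BDM) pin at startup.
       SOPT_REG and BDM_ENABLE_BIT are placeholders, not real S08GB60A values. */
    #define SOPT_REG        (*(volatile unsigned char *)0x1802u)
    #define BDM_ENABLE_BIT  0x02u

    static void disable_debug_port(void) {
      /* write-once register: clearing the enable bit turns the debug pin into
         a normal port pin, and the debugger can no longer attach */
      SOPT_REG &= (unsigned char)~BDM_ENABLE_BIT;
    }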

Comments on “Debug vs. Release?”


  1. Hi Erich,

    I asked this question of IAR, as I’m using the IAR EWB IDE, and they responded with the following, which I think still has a bearing on using CodeWarrior for MCU with Processor Expert:


    IAR Embedded Workbench for ARM provides two default configurations of projects:

    Release and Debug

    From Help > IDE Project Management and Building Guide:

    “The only differences are the options used for optimisation, debug
    information, and output format. In the Release configuration, the preprocessor symbol
    NDEBUG is defined, which means the application will not contain any asserts”.

    These configurations are only created by default, almost as an example, so
    that it is easy to switch between Debug and “non-debug” builds.

    When you are ready to release, you often want to:

    1.
    Make sure that there is no debug info included in the produced binary for security reasons.
    Since you use srec/bin, there is no debug info included anyway.

    2.
    Make sure there are no asserts or debug-probe specific code included in the application.

    For example, in Release mode, you don’t want printf to be redirected
    to the Terminal I/O window via Semihosting.
    (Project > Options > General Options > Library Configuration > stdout/stderr:
    ‘Via semihosting’). If you run semihosted printf without a debugger, the application
    may hang, see:

    https://www.iar.com/support/tech-notes/debugger/application-does-not-run-stand-alone/

    So, I don’t agree with the mcu on eclipse article, since the author forgot about the
    debug-probe/debugger-specific handling of, for example, printf.

    When debugging with a default Debug configuration, the Terminal I/O printf option is nice to have.
    With ‘Via SWO’ printfs are sent out even when a debugger is not connected.

    3.
    Increase optimisation, perhaps. Since it is difficult to debug when optimisation is set to high
    (this is a fact, see Help > C-SPY Debugging Guide, chapter ‘Effects of optimizations’ for details),
    the Debug configuration is by default set to a low optimisation level.

    Of course, you can modify and/or create new configurations that suit both
    debugging and release well in your specific application. Hope this helps!

    I then went on to ask the following question:

    If my application runs fine with optimisation off and I have plenty of memory on chip
    for the code… why would I want to use optimisation?

    That is a very good question!

    One reason would be optimisation for speed (not size).
    See Help > C/C++ Development Guide, chapter ‘SPEED VERSUS SIZE’.

    “speed will trade size for speed, whereas size will trade speed for size”

    For example, you might have an application that should:
    wake up from sleep, perform something as fast as possible, then go back to sleep.

    In such a case, you would want the application to be as fast as possible, so that
    the CPU can sleep longer and save battery life for example.

    If you have a large flash and such an application, the code size is not important,
    so you can enable all speed optimisations with the --no_size_constraints option too.
    This means that loops may be unrolled and functions may be inlined, which
    results in larger but faster code.

    Otherwise, I can’t say why you would want to optimise.

    * Perhaps for testing, so that you know that the code works with high optimisation too.
    (Often C language violations (like faulty casts) are detected with optimisation set to
    high. The code may work by chance with no optimisations).

    * Perhaps if you use a boot-loader or over-the-air/internet FW updates, then it could
    be important to reduce the size of the binary?

    I hope this helps, and I would be interested in your response.


    • There are always cases where using different build configurations makes sense, like ‘application with advanced features’ or ‘application with reduced functionality’ or something like that. What I’m against is using a ‘debug’ configuration for debugging/testing and then shipping a ‘release’ configuration of that code: this is the wrong mindset and approach in my view. I want to eat my own dog food and develop on/with what actually ends up in the product. Developing with the ‘final’ settings inherently adds a lot of informal test coverage. If it is about stripping off the debug information, that can always be done in a separate step.


  2. Hi Erich, I’m curious that you conclude that many tools don’t currently generate the debug info needed to allow for “proper” debugging of optimised code, yet you say that you manage to debug your optimised production program image.
    How so?
    Surely you have to compromise on optimisation options and/or accurate debugging (e.g. accurate source/asm line tracking)?
    I can’t see how there could not be a trade-off or conflicting goals here with most tools currently available – in particular the GCC tools (not sure if llvm or commercial tools are significantly better in this respect?).


    • Because I have written different compiler front ends and many back ends, I know how hard it is to generate the correct debug information for highly optimized code. It is a problem for the compiler writer, as it is not easy. It seems that many compiler writers took shortcuts or were lazy and did not route the full debug information through. What Dwarf2 and Dwarf3 provide would allow you to debug highly optimized code very well, so it should be possible. Actually, gcc does a pretty good job lately. In the end, I still look at the assembly code if I have any doubts about what is going on.
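
      A typical way to produce that assembly view with the GNU tools is an interleaved source/disassembly listing (a sketch; app.elf is a placeholder name):

          objdump -d -S app.elf > app.lst   # disassembly interleaved with the source lines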


      • Thanks but you still say “it should be possible” to debug highly optimised code whereas my experience is that it’s not – at least not without significant issues or tradeoffs such as foregoing certain types of optimisations and/or putting up with issues (in particular inaccurate line tracking) while debugging. I can’t see how you can successfully and interactively debug code using say -O3 or -Os? Unless you use printf debugging or something like that?


        • Yes, I’m successfully debugging highly optimized code with -O3 or -Os. And I do *not* use printf(), as printf() is very intrusive and will even change the code generated around the printf() call, completely screwing things up. I’m also using trace tools for certain parts. So debugging highly optimized code from gcc *is* possible. But it could be better if the compiler would use all the Dwarf2 and Dwarf3 features, which would add more debug information.


      • (Can’t reply to your reply so I have to reply to your earlier post…!).

        Thanks for the info. But when debugging code compiled with -O3 or -Os how does your line tracking etc. work? In my experience it doesn’t work very well – at least with gcc tools. Are you using something other than gcc in this case?



  3. I am using CodeWarrior for MCU for BLDC motor control development on the S12Z MagniV.
    In between my actual code, I have inserted lots of debugging code (like toggling a GPIO or logging data in an array etc.) on certain conditions.

    Currently, I manually enable or disable (comment out) a pre-processor macro which is used as a compile switch (#if..#else..#endif) to enable or disable the debug code. Instead of modifying the code every time, I want to enable or disable all such macros simply by selecting the appropriate configuration (Debug or Release).

    Is there a way to do this?
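
    A common way to do this (a minimal sketch): let each build configuration define a preprocessor symbol in its compiler settings, e.g. DEBUG_CODE=1 in the Debug configuration and nothing in Release, and key all compile switches off that single symbol. DEBUG_CODE is a placeholder name:

        /* hypothetical header: DEBUG_CODE=1 is assumed to be defined in the Debug
           build configuration's preprocessor settings, and left undefined in Release */
        #ifndef DEBUG_CODE
          #define DEBUG_CODE 0   /* default when not defined: Release behavior */
        #endif

        void control_loop(void) {
        #if DEBUG_CODE
          /* debug-only code: e.g. toggle a GPIO or log data into a RAM buffer */
        #endif
          /* ... actual motor control code ... */
        }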


