May 24, 2011 AT 11:12 am

ATMega Memory Usage

JeeLabs has a very informative post about memory usage on the ATMega, along with some Arduino sample code and an explanation of how bad memory management can cause sketches to fail. More from Jean-Claude:

Sometimes, it’s useful to find out how much memory a sketch uses.

Sometimes, it’s essential to do so, i.e. when you’re reaching the limit. Because strange and totally unpredictable things happen once you run out of memory.

Running out of RAM space is the nasty one. Because it can happen at any time, not necessarily at startup, and not even predictably because interrupt routines can trigger the problem.

There are three areas in RAM:

  • static data, i.e. global variables and arrays … and strings!
  • the “heap”, which gets used if you call malloc() and free()
  • the “stack”, which is what gets consumed as one function calls another

The heap grows up, and is used in a fairly unpredictable manner. If you release areas, they will lead to unused gaps in the heap, which get re-used by new calls to malloc() if the requested block fits in one of those gaps.
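To make that gap re-use concrete, the short sketch below (our own illustration, not code from the JeeLabs post; the block sizes and variable names are arbitrary) allocates three blocks, frees the middle one, and then allocates a fourth block that fits into the hole:

```cpp
// Illustration of heap gap re-use on a classic ATMega-based Arduino.
// Not from the JeeLabs post; sizes and names are arbitrary.
void setup() {
  Serial.begin(57600);

  char *a = (char *) malloc(16);
  char *b = (char *) malloc(16);
  char *c = (char *) malloc(16);

  Serial.print("a: "); Serial.println((unsigned int) a, HEX);
  Serial.print("b: "); Serial.println((unsigned int) b, HEX);
  Serial.print("c: "); Serial.println((unsigned int) c, HEX);

  free(b);                        // leaves a 16-byte gap between a and c

  char *d = (char *) malloc(16);  // small enough to fit in that gap
  Serial.print("d: "); Serial.println((unsigned int) d, HEX);
  // With avr-libc's allocator, d will typically land at the address b had.
}

void loop() { }
```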

At any point in time, there is a highest point in RAM occupied by the heap. This value can be found in a system variable called __brkval.

The stack is located at the end of RAM, and expands and contracts down towards the heap area. Stack space gets allocated and released as needed by functions calling other functions. That’s where local variables get stored.
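The JeeLabs post includes its own measurement code, but the idea behind it can be sketched in a few lines: a local variable lives near the current top of the stack, __brkval (or __heap_start when malloc() hasn’t been used yet) marks the top of the heap, and the difference between the two is the free RAM left in between. The commonly circulated freeRam() snippet for classic AVR boards does exactly that:

```cpp
// Rough estimate of the free RAM between the heap and the stack
// on a classic ATMega (the widely used freeRam() snippet).
int freeRam() {
  extern int __heap_start, *__brkval;
  int v;  // a local variable sits near the current top of the stack
  return (int) &v - (__brkval == 0 ? (int) &__heap_start : (int) __brkval);
}

void setup() {
  Serial.begin(57600);
  Serial.print("free RAM: ");
  Serial.println(freeRam());
}

void loop() { }
```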

Check out his post for the code and further explanation.


2 Comments

  1. Just use PROGMEM for const strings.

  2. “I would hope that… but it probably doesn’t.”

    Well, actually, it does. This optimization is called “constant folding”, and all decent compilers/linkers do it. I knew that pcc-based compilers had been doing this for decades; I wasn’t able to confirm in a quick once-over of the documentation that GCC does it too, but it certainly appears to.

    Many “real” embedded systems don’t use a garden-variety libc-style heap, but instead do their own memory management with custom memory pools that take all of the non-determinism out of heap usage. They do this for the simple reason that it’s generally impossible to guarantee your system will not fail otherwise, assuming that you care about that sort of thing.
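
Regarding the first comment: on an ATMega, ordinary string literals get copied from flash into RAM at startup, which is exactly the “static data … and strings!” cost mentioned above. Marking constant strings with PROGMEM keeps them in flash, at the price of reading them back explicitly. A minimal sketch of the idea (our example, not the commenter’s code):

```cpp
#include <avr/pgmspace.h>

// Keep the string in flash instead of having it copied into RAM at startup.
const char greeting[] PROGMEM = "Hello from flash, not from RAM";

void setup() {
  Serial.begin(57600);

  // Flash is a separate address space on AVR, so read it back byte by byte.
  for (const char *p = greeting; ; p++) {
    char c = pgm_read_byte(p);
    if (c == 0)
      break;
    Serial.print(c);
  }
  Serial.println();
}

void loop() { }
```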
