The combination of the NXP MCUXpresso IDE with the NXP MCU-Link Pro debug probe implements a nifty power and energy measurement tool (see New “MCU-Link Pro”: Debug Probe with Energy Measurement). The Eclipse-based IDE provides a dedicated view to inspect the collected data. It can export and import data, but only in a binary format. In this article I present a way to export the data and then convert it into .csv or any other format, for processing or visualizing it in different ways.
Using an open source command line tool, the binary data gets converted into a .csv format, which can then be consumed by many tools, e.g. gnuplot.
Outline
The NXP MCU-Link Pro debug probe with the MCUXpresso IDE can collect and export the measurement data. The exported data is in a binary format, briefly explained in the IDE user manual, and suitable for importing the data again into the IDE. What is missing is an export function into a format consumable by external tools.
In a university research project, the external research partner needed a way to process the data with their own tools. For this I wrote a converter, reading the binary data and writing it into a text file, suitable to be consumed by data visualization tools. The file converter is a simple command line utility, written in C, running both on Windows and Linux. The source code (a single C file 🙂 ) is open source with a permissive license and posted on GitHub (see link at the end of this article).
In the next sections, I’ll describe the binary data format and how I’m converting it. Based on this and the available source, you should be able to use the converter as-is or adapt it for your own data conversion. And I’ll show example data with the output of the data converter.
In any case, have a look at the code on Github (link at the end of the article).
Exporting and Importing Data in the IDE
The ‘Energy Measurement’ Eclipse view in MCUXpresso IDE comes with import and export buttons:
Export generates a zip file which can be imported again, or processed by other tools. The zip contains two files: a .csv file with header information, and the raw binary data:
CSV ‘Header’ Exported File
The important information in the .csv file are two numbers: a multiply and a divide value:
The data values in rawData.bin need to be converted using these numbers. Below is an example with the above values, using 0xd8b6f as input:
0xd8b6f ==> (0xd8b6f * UnitMul) / UnitDiv ==> (887663 * 3) / 393216000 ==> 0.006772331237793 A (6.772331 mA)
To help with this, I wrote several helpers:
static double Convert_to_mA(uint32_t adc) {
  /* cast to double first: adc*1000*unitMul could overflow 32-bit integer arithmetic */
  return ((double)adc*1000.0*unitMul)/unitDiv;
}

static double Convert_double_to_mA(double val) {
  return (val*1000.0*unitMul)/unitDiv;
}

static double Convert_double_to_us(double val) {
  return (val*unitMul)/unitDiv;
}
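With the numbers from the example above, the conversion can be checked in a small standalone sketch. Note that unitMul and unitDiv are filled in here as constants; the real converter reads them from the exported .csv file:

```c
#include <stdint.h>
#include <stdio.h>

static uint32_t unitMul = 3;         /* from the exported .csv header file */
static uint32_t unitDiv = 393216000; /* from the exported .csv header file */

/* convert a raw ADC value to mA: (adc*1000*mul)/div, computed in double to avoid overflow */
static double Convert_to_mA(uint32_t adc) {
  return ((double)adc*1000.0*unitMul)/unitDiv;
}
```

Converting 0xd8b6f with these settings yields about 6.772331 mA, matching the converter output shown later.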
Binary File Header
The exported binary file has a header, followed by the data (file shown with EHEP):
- Magic number (char[4]): characters to identify and mark the file, currently “$EMF”.
- Version (uint16_t): version number, currently 0x0001.
- Base (uint32_t): number of data samples between summary records, see later on.
- Step (uint64_t): seems to be UI related only, so we can ignore it.
- SourceID (uint32_t): number to identify the data source (mA, …), present in the overview .csv file.
- <Data>: the data itself, with timestamps, data and summary records.
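Note that the verbose output below prints the magic as 0x24454d46, with the ‘$’ (0x24) as the most significant byte, so the multi-byte fields appear to be stored big-endian. A minimal sketch of parsing these header fields from a raw byte buffer (struct and function names are mine, and the real file may contain a few more bytes before the data: the first timestamp in the example sits at offset 0x1e, while these five fields only account for 22 bytes):

```c
#include <stdint.h>
#include <string.h>

typedef struct {
  char magic[4];     /* "$EMF" */
  uint16_t version;  /* currently 0x0001 */
  uint32_t base;     /* number of data samples between summaries */
  uint64_t step;     /* UI related only */
  uint32_t sourceID; /* identifies the data source (mA, ...) */
} emf_header_t;

/* read an unsigned value of 'n' bytes, most significant byte first */
static uint64_t read_be(const uint8_t *p, int n) {
  uint64_t v = 0;
  for (int i=0; i<n; i++) {
    v = (v<<8) | p[i];
  }
  return v;
}

/* parse the header fields from a raw buffer; returns 0 on success */
static int parse_header(const uint8_t *buf, emf_header_t *hdr) {
  memcpy(hdr->magic, buf, 4);
  if (memcmp(hdr->magic, "$EMF", 4) != 0) {
    return -1; /* not an energy measurement file */
  }
  hdr->version  = (uint16_t)read_be(&buf[4], 2);
  hdr->base     = (uint32_t)read_be(&buf[6], 4);
  hdr->step     = read_be(&buf[10], 8);
  hdr->sourceID = (uint32_t)read_be(&buf[18], 4);
  return 0;
}
```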
Below is the verbose console output of the data converter for the above data:
Read NXP power measurement binary data file and convert it to CSV.
unitMul: 3
unitDiv: 393216000
open input file 'rawData.bin'
input file size: 475154 bytes
open output file 'data.csv'
magic number: 0x24454d46
version: 0x0001
base: 0x00000010
step: 0x0000000000000010
sourceID: 0x00000004
Data
Data is stored in the following way:
- Timestamp (double, 64 bit): time in us, as floating point
- ‘base’ number of values (uint32_t): ADC values, which need the mul/div treatment
Below is the layout of the first data record, for a base of 16: after the timestamp, there are 16 4-byte data items:
The timestamp above (0x41 d9 c3 bc 7a c0 00 00) translates to the IEEE754 double 1729032683.0 in microseconds:
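This decoding can be reproduced in a few lines of C: assemble the 8 bytes (most significant byte first, as they appear in the file) into a uint64_t and reinterpret the bit pattern as a double. A minimal sketch (the function name is mine; it assumes the host uses IEEE754 binary64 doubles):

```c
#include <stdint.h>
#include <string.h>

/* build a double from 8 raw bytes, most significant byte first */
static double bytes_to_double(const uint8_t bytes[8]) {
  uint64_t u = 0;
  double d;

  for (int i=0; i<8; i++) {
    u = (u<<8) | bytes[i];
  }
  memcpy(&d, &u, sizeof(d)); /* reinterpret the bit pattern as double */
  return d;
}
```

For the bytes 0x41 d9 c3 bc 7a c0 00 00 this returns 1729032683.0.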
There is a time stamp at the start of each run of ‘base’ values. To know the timestamp of each data value, one has to read at least two time stamps, the one at the beginning and the one at the end, and then distribute the time across the data values in between.
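The distribution can be sketched as a simple linear interpolation (a hypothetical helper, not taken from the converter source):

```c
/* timestamp of data value 'idx' (0..base-1), given the timestamp
   before (t0) and after (t1) the block of 'base' values */
static double sample_time_us(double t0, double t1, int base, int idx) {
  return t0 + (double)idx*(t1 - t0)/(double)base;
}
```

With t0 = 1729032683 us, t1 = 1729033003 us and a base of 16, this yields the 20 us spacing between samples visible in the converter output.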
Below is the verbose console output of the converter program: it reads the first time stamp, then (silently) the data items, followed by a summary record. Then it reads the second time stamp at file position 0x76, which allows it to assign time stamps to the 16 previously read data items:
filepos: 0x0000001e timestamp: 1729032683.000000 us
16: 0x5e: ----------------------------
==> min: 0xd5759, 6.670601 mA
==> avg: 6.739583 mA
==> max: 0xdc565, 6.885536 mA
--------------------------------
filepos: 0x00000076 timestamp: 1729033003.000000 us
0: 0xd8b6f, 6.772331 mA, 1729032683.000000 us
1: 0xd6e10, 6.714966 mA, 1729032703.000000 us
2: 0xd7e82, 6.747086 mA, 1729032723.000000 us
3: 0xd594e, 6.674423 mA, 1729032743.000000 us
4: 0xd7cf1, 6.744026 mA, 1729032763.000000 us
5: 0xdc565, 6.885536 mA, 1729032783.000000 us
6: 0xd84c7, 6.759331 mA, 1729032803.000000 us
7: 0xd839a, 6.757034 mA, 1729032823.000000 us
8: 0xd67cc, 6.702728 mA, 1729032843.000000 us
9: 0xd897a, 6.768509 mA, 1729032863.000000 us
10: 0xd5759, 6.670601 mA, 1729032883.000000 us
11: 0xd7649, 6.731026 mA, 1729032903.000000 us
12: 0xd594e, 6.674423 mA, 1729032923.000000 us
13: 0xd81a4, 6.753204 mA, 1729032943.000000 us
14: 0xd81a4, 6.753204 mA, 1729032963.000000 us
15: 0xd7327, 6.724907 mA, 1729032983.000000 us
The data items are subject to the mul/div scaling:
double mA = Convert_to_mA(data.value);
#if CONFIG_LOG_DATA_TO_CONSOLE
printf("%d: 0x%x, %f mA\n", nofItems, data.value, mA);
#endif
Summary Records
After such a sequence of data values, there are summary records with:
- minimum value (uint32_t), apply mul/div scaling
- average value (double, 64bit), apply mul/div scaling
- maximum value (uint32_t), apply mul/div scaling
💡 If you are wondering: the summary data makes no sense in such a data file. It can make sense in the IDE GUI for zooming in/out. I simply read and ignore it.
I have marked a summary record below:
And here is the converter console output for above summary item:
16, 0x5e: ----------------------------
==> min: 0xd5759, 6.670601 mA
==> avg: 6.739583 mA
==> max: 0xdc565, 6.885536 mA
--------------------------------
And this is the code (simplified):
if (read32u(fp, &data.summary.min) != 0) {
  printf("failed reading min\n");
  return -1;
}
#if CONFIG_LOG_SUMMARY_TO_CONSOLE
printf("==> min: 0x%x, %f mA\n", data.summary.min, Convert_to_mA(data.summary.min));
#endif
if (ftell(fp)==fileSize) { /* the end of the file is without a summary */
  printf("reached END\n");
  break;
}
if (readFloat64(fp, &data.summary.avg) != 0) {
  printf("failed reading avg\n");
  return -1;
}
#if CONFIG_LOG_SUMMARY_TO_CONSOLE
printf("==> avg: %f mA\n", Convert_double_to_mA(data.summary.avg));
#endif
if (ftell(fp)==fileSize) { /* the end of the file is without a summary */
  printf("reached END\n");
  break;
}
if (read32u(fp, &data.summary.max) != 0) {
  printf("failed reading max\n");
  return -1;
}
#if CONFIG_LOG_SUMMARY_TO_CONSOLE
printf("==> max: 0x%x, %f mA\n", data.summary.max, Convert_to_mA(data.summary.max));
#endif
if (ftell(fp)==fileSize) { /* the end of the file is without a summary */
  printf("reached END\n");
  break;
}
} /* for nofSummaries */
#if CONFIG_LOG_SUMMARY_TO_CONSOLE
printf("--------------------------------\n");
#endif
Multiple Summary Records
There might be more than a single summary record, which makes skipping them not so easy. So we need a deeper look at the number of summary records:
After every ‘base’ data items (e.g. 16), there is at least one summary item. For example, for a base of 16:
data1 - data16
summary (of last 16 (base) items)
data17 - data32
summary (of last base items)
data33 ...
Each summary covers the data of the previous n (base or 16) items.
After more items (exactly after data item number 256, or base^2) there are two summary items:
...
data256
summary (summary of last base (16) items)
summary (summary of last base^2 (256) items)
data257 ... data272
summary (of last base items)
data273 - data288
summary (of last base items)
Notice that after that, there are single summary items following.
After every multiple of base^2 data items, there is an extra summary covering the last base^2 data items. This means there are two summaries after 1*256, 2*256, 3*256, 4*256, …, the extra summary covering the previous 256 data items.
data481 - data496
summary of last base items
data497 - data512
summary of last base items
summary of last 256 items
data513 - data528
summary of last base items
data529 - ...
This goes on until we hit the next level (base^3, or 4096 for a base of 16): here we get yet another extra summary item, so in fact three:
data4081 - data4096
summary of base items
summary of base^2 items
summary of base^3 items
data4097 - ...
Or in other words, for a base value of 16:
- After ‘base’ data items, there is a single summary item: one every 16 items.
- After 16^2 data items and multiples of it, there is one extra summary item: so after item 256 and multiples of it.
- Again an extra summary after 16^3 (4096) data items and multiples of it.
- and so on….
So I wrote a function returning the number of summary items, given a data item number which is a multiple of base:
static int getNofSummaryItems(int dataItemIdx, int base) {
  uint32_t log, val;

  if ((dataItemIdx%base)!=0) {
    return 0; /* must be a multiple of base */
  }
  log = log_a_to_base_b(dataItemIdx, base); /* floor of log to base */
  while(log>1) {
    val = powi32(base, log); /* base^log */
    if (dataItemIdx==val) { /* matching (base^log)? */
      return log;
    }
    /* else: check if multiple of base^log */
    if ((dataItemIdx%val)==0) {
      return log;
    }
    log--;
  }
  return log;
}
With this function it is possible to know the number of summary records and skip them :-).
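The helpers log_a_to_base_b() and powi32() used above are not shown; a minimal integer implementation could look like this (the names match the function above, the bodies are my own sketch):

```c
#include <stdint.h>

/* integer power: returns base^exp */
static uint32_t powi32(uint32_t base, uint32_t exp) {
  uint32_t res = 1;

  while (exp-- > 0) {
    res *= base;
  }
  return res;
}

/* floor of the logarithm of x to base b: largest n with b^n <= x */
static uint32_t log_a_to_base_b(uint32_t x, uint32_t b) {
  uint32_t n = 0;

  while (x >= b) {
    x /= b;
    n++;
  }
  return n;
}
```

For example, powi32(16, 2) is 256 and log_a_to_base_b(4096, 16) is 3, matching the base^2 and base^3 levels discussed above.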
End of Data
There is no information about the total number of items in the file, and no special stop marker. It seems that the data can end anytime, during the data or during the summary reading. If the program hits an unexpected end of the file or encounters a reading error, it returns with an error code of -1.
Writing CSV Data
With all this, writing a .csv (comma-separated-values) file is easy: the program writes the header with fprintf():
fprintf(outFile, "us,mA\n"); /* write CSV header */
and the data with fprintf():
double us = Convert_double_to_us(data.timeStamp);
double mA = Convert_to_mA(data.value);
fprintf(outFile, "%f,%f\n", us, mA);
Below is how it looks:
us,mA
1729032683.000000,6.772331
1729032703.000000,6.714966
1729032723.000000,6.747086
1729032743.000000,6.674423
1729032763.000000,6.744026
1729032783.000000,6.885536
1729032803.000000,6.759331
1729032823.000000,6.757034
1729032843.000000,6.702728
1729032863.000000,6.768509
1729032883.000000,6.670601
1729032903.000000,6.731026
1729032923.000000,6.674423
1729032943.000000,6.753204
1729032963.000000,6.753204
1729032983.000000,6.724907
1729033003.000000,6.750145
1729033023.000000,6.714966
1729033043.000000,6.753204
1729033063.000000,6.706551
1729033083.000000,6.759331
1729033103.000000,6.670601
1729033123.000000,6.740967
1729033143.000000,6.699669
....
Command Line Interface
With the source file on GitHub, I have provided a makefile to build it (assuming gcc with make):
make
Then run the executable, which has a -h (help) option:
convert -h
Read NXP power measurement binary data file and convert it to CSV.
Use one of the following options:
-h : prints this help
-f <file> : input file, default "rawData.bin"
-o <file> : output file, default "data.csv"
Visualizing
Having the data in .csv format, many viewers can be used, for example Octave or Matlab. I can use Excel to show the data (if there are not too many data points):
A better use is one of the many online data viewers, e.g. https://www.csvplot.com/:
If you need some more fancy ways to show the data (zoom, filter, select, …), have a look at gnuplot. It is a powerful and fast command line tool.
# file: csv.gnuplot
set datafile separator ',' # CSV with comma as separator
set key autotitle columnhead # use the first line as title
set ylabel "Current (mA)" # label for the Y axis
set xlabel 'Time (us)' # label for the X axis
set style line 100 lt 1 lc rgb "grey" lw 0.5 # linestyle for the grid
set grid ls 100 # enable grid with specific linestyle
plot "data.csv" using 1:2 with lines
Running the script with
gnuplot -p csv.gnuplot
gives:
The tool offers tons of features for presenting the data; it is worth checking out the gnuplot tutorials and help pages on this.
Summary
Collecting power and energy data with the MCU-Link Pro has been very useful for me, especially for low-power applications. Now I can export, read and convert the data into a format consumable by external tools. I hope this is useful for you too.
Happy converting 🙂
PS: The converter is work-in-progress, so check out the latest version and development on GitHub.
Links
- Data Converter Code on GitHub: https://github.com/ErichStyger/mcuoneclipse/tree/master/MCU-Link/EnergyMeasurement
- NXP MCU-Link Pro web page: https://www.nxp.com/design/microcontrollers-developer-resources/mcu-link-pro-debug-probe:MCU-LINK-PRO
- New “MCU-Link Pro”: Debug Probe with Energy Measurement
- Visualizing Data with Eclipse, gdb and gnuplot