Category Archives: tools

Updating an NXP MCU-Link to work with OpenOCD

Not just OpenOCD really, just… getting it updated at all.

MCU-Link is a USB-HS “CMSIS-DAP” compatible debug interface, cheaply available. It has a 2x5 1.27mm Cortex debug connector, a bonus USB-to-UART TTL on 2.54mm pins, and even includes the 1.27mm 10-pin cable in the little case. They say it supports SWO trace at ~9M. Excellent!

As shipped it was running a very old firmware, older than anything in their release notes: “MCU-LINK r0FF CMSIS-DAP V0.078”. OpenOCD didn’t even recognise this.

NXP offers an “MCU-Link installer for Linux”, which at the time of writing was v3.128. That certainly sounds more up to date. I downloaded that and found a shell script that wanted to be run as root… So, with less commentary, here’s how I jumped through all the hoops to upgrade the MCU-Link to something recent and get it working properly…

$ sh MCU-LINK_installer_3.128.x86_64.deb.bin --noexec --target .
$ ar x MCU-LINK_installer_3.128.x86_64.deb
$ tar -xf data.tar.gz
$ cd usr/local/MCU-LINK_installer_3.128/
# Now, plug in the "firmware update" jumper and replug the dongle
# This flashes the firmware itself...
$ bin/blhost --usb 0x1fc9,0x0021 flash-image probe_firmware/MCU-LINK_CMSIS-DAP_V3_128.s19 erase 0

# and here we "configure" it, leaving it as unlocked as we can...
$ bin/blhost --usb 0x1fc9,0x0021 read-memory 0x9dc00  512 myoutdump.bin 0
$ bin/mculink_cfg myoutdump.bin --clear --no-device_vendor --no-device_name --no-board_vendor --no-board_name
$ bin/blhost --usb 0x1fc9,0x0021 flash-erase-region 0x9dc00  512
$ bin/blhost --usb 0x1fc9,0x0021 write-memory 0x9dc00  myoutdump.bin  0

Now, unplug, take off the jumper, and replug. You now have much more standard-looking USB descriptors, and it’s running “Product: MCU-LINK (r0FF) CMSIS-DAP V3.128”.
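A quick way to double check (NXP’s vendor ID is 0x1fc9, as also seen in the OpenOCD output below):

$ lsusb -d 1fc9: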

And… now it happily supports non-NXP parts too.

$ openocd -f interface/cmsis-dap.cfg  -f target/stm32g4x.cfg 
Open On-Chip Debugger 0.12.0+dev-00469-g1b4afd13f (2024-01-08-22:49)
Licensed under GNU GPL v2
For bug reports, read
	http://openocd.org/doc/doxygen/bugs.html
Info : auto-selecting first available session transport "swd". To override use 'transport select <transport>'.
Info : Listening on port 6666 for tcl connections
Info : Listening on port 4444 for telnet connections
Info : Using CMSIS-DAPv2 interface with VID:PID=0x1fc9:0x0143, serial=X1JALQM40K3MS
Info : CMSIS-DAP: SWD supported
Info : CMSIS-DAP: JTAG supported
Info : CMSIS-DAP: SWO-UART supported
Info : CMSIS-DAP: Atomic commands supported
Info : CMSIS-DAP: Test domain timer supported
Info : CMSIS-DAP: UART via USB COM port supported
Info : CMSIS-DAP: FW Version = 2.1.1
Info : CMSIS-DAP: Serial# = X1JALQM40K3MS
Info : CMSIS-DAP: Interface Initialised (SWD)
Info : SWCLK/TCK = 0 SWDIO/TMS = 1 TDI = 0 TDO = 0 nTRST = 0 nRESET = 1
Info : CMSIS-DAP: Interface ready
Info : clock speed 2000 kHz
Info : SWD DPIDR 0x2ba01477
Info : [stm32g4x.cpu] Cortex-M4 r0p1 processor detected
Info : [stm32g4x.cpu] target has 6 breakpoints, 4 watchpoints
Info : [stm32g4x.cpu] Examination succeed
Info : starting gdb server for stm32g4x.cpu on 3333
Info : Listening on port 3333 for gdb connections

Later I’ll test the actual trace; I just wanted to write these steps down while I still had it all in front of me.

Jenkins agent systemd unit file

So, you try to set up an agent, and instead of getting a nice package or anything, you get told, “run this long java command line app”… wat. Here’s a (very) basic systemd unit file to run your agent. You can copy the command lines from what Jenkins provides itself.

$ cat /etc/systemd/system/jenkins-agent.service 
[Unit]
Description=My radical Jenkins agent
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/bin/java -jar /var/opt/jenkins-agent/agent.jar -jnlpUrl https://radical.example.org/computer/blah/slave-agent.jnlp -secret @/var/opt/jenkins-agent/my.jenkins.secret-file -workDir /var/opt/jenkins-agent
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

(And remember, JDK8 for your agent, until https://issues.jenkins-ci.org/browse/JENKINS-63313 is fixed)

Then, just systemctl enable jenkins-agent and systemctl start jenkins-agent.

Using NetBeans for STM32 development with OpenOCD

This post supersedes http://false.ekta.is/2012/05/using-netbeans-for-stm32-development-with-stlink-texane/; it has been updated for 2016 and the current best software tools.  Again, this is only focussed on a linux desktop environment.

Required pieces haven’t changed much, you still need:

  1. A tool chain.
  2. GDB Server middleware (or just a flashing tool, if you want to live in the dark ages)
  3. A “sexy” IDE (If you disagree on wanting an IDE, you’re reading the wrong post)

Getting a toolchain

New good advice

Advice here hasn’t changed.  The best toolchain is still gcc-arm-embedded.  They steadily roll out updates, it’s the blessed upstream of all ARM Cortex GCC work, and it has proper functional multilib support and proper functional documentation and bug reporting.  It even has proper multi platform binaries for windows and macs.  Some distros are packaging “arm-none-eabi-XXXXX” packages, but they’re often old, repackages of old, poorly packaged or otherwise broken.  As of November 2015 for instance, ArchLinux was packaging a gcc 5.2 binary explicitly for ARM Cortex that did not support the -mcpu=cortex-m7 option added in the gcc 5.x series.  Just say no.

I like untarring the binaries to ~/tools and then symlinking into ~/.local/bin; it avoids having to relogin or start new terminals the way editing .bashrc or .profile does.
~/.local/bin$ ln -s ~/tools/gcc-arm-none-eabi-5_2-2015q4/bin/arm-none-eabi-* .

Old bad advice

The internet is (now) full of old articles recommending things like “summon-arm-toolchain” (deprecated even by its author) or “code sourcery (lite)” (CodeSourcery was bought by Mentor, and this has been slowly killed off.  Years ago this was a good choice, but all the work they did has long since been upstreamed).  You can even find advice saying you need to compile your own.  Pay no attention to any of this.  It’s well out of date.

GDB Server middleware

New good advice

Get OpenOCD. Make sure it’s version 0.9 or better. 0.8 and 0.7 will work, but 0.9 is a _good_ release for Cortex-M and STM32 parts. If your distro provides this packaged, just use it. Fedora 22 has OpenOCD 0.8, Fedora 23 has OpenOCD 0.9. Otherwise, build it from source.
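Building from source is just the usual autotools dance; from a release tarball it’s roughly the following (pick whatever configure options you need for your adapters, ./configure --help lists them):

$ ./configure
$ make
$ sudo make install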

Old bad advice

Don’t use texane/stlink.  Just don’t.  It’s poorly maintained, regularly breaks things when new targets are introduced and not nearly as flexible as OpenOCD.  It did move a lot faster than OpenOCD in the early days, and if you want a simpler code base to go and hack to pieces for this, go knock yourself out, but don’t ask for help when it breaks.

Netbeans

No major changes here, just some updates and dropping old warnings.  You should still set up your toolchain in netbeans first; it makes the autodetection for code completion much more reliable.  I’ve updated and created new screenshots for Netbeans 8.1, the latest current release.

First, go to Tools->Options->C/C++->Build Tools and Add a new Toolchain…

Adding a new toolchain to netbeans

Put in the “Base directory” of where you extracted the toolchain.  In theory netbeans uses the base directory and the “family” to autodetect here, but it doesn’t seem to understand cross tools very well.

Base path for new toolchain and name

Which means you’ll have to fill in the names of the tools yourself, as shown below.  Click on “Versions” afterwards to make sure it all works.

Adding explicit paths to netbeans toolchains

“Versions” should show you the right thing already, as shown below.

Versions from our new toolchain (and an error from gdb we must fix)

If you’re getting the error about ncurses from gdb, this is because newer gdb builds include the curses “tui” interface to gdb in the standard build.  (Yay! this is a good thing!)  However, as the gcc-arm-embedded toolchains are all provided as 32bit binaries, you may be missing the 32bit ncurses lib on your system.  On Fedora, this is provided in the ncurses-libs.i686 package.
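Something like this should sort it out (dnf on Fedora 22 and later, yum before that):

$ sudo dnf install ncurses-libs.i686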

Ok, now it’s time to build something.  What to build is your problem; I’m going to use one of the libopencm3 test programs, specifically https://github.com/libopencm3/libopencm3/tree/master/tests/gadget-zero (the stm32l1 version).

Programming your device

Netbeans doesn’t really have any great way of doing this that I know of.  You can use build configurations to have “run” run something else, which works, but it’s a little fiddly.  I should spend more time on that though.  (See later for a way of doing it iteratively via the debugger console in netbeans)

In the meantime, if you just want to straight out program your device:

$ openocd -f board/stm32ldiscovery.cfg -c "program usb-gadget0-stm32l1-generic.elf verify reset exit"

No need for bin files or anything; there’s a time and a place for those, and if you don’t know and can’t explain why you need bin files, then elf files are just better in every way.

Debugging your part

GDB is always going to be a big part of this, but assuming you’ve got it flashed (by programming as above, or however else), you can debug in netbeans directly.  First, make sure OpenOCD is running again, and just leave it running.


$ openocd -f board/stm32ldiscovery.cfg

Next, install the gdbserver plugin, then choose Debug->Attach Debugger from the menu.

Attach to a gdbserver

  • Make sure that you have “gdbserver” as the debugger type.  (Plugin must be installed)
  • Make sure that you have “ext :3333” for the target.  By default it will show “remote host:port” but we want (need) to use “extended-remote” to be able to restart the process.  (See the GDB manual for more details, and the command line equivalent just after this list.)
  • Make sure that you have the right project selected.  There’s a bug in the gdbserver plugin that always forgets this.
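For reference, what the plugin ends up doing is roughly the same as this in command line gdb (the elf being whatever your project built, the gadget-zero example here):

$ arm-none-eabi-gdb usb-gadget0-stm32l1-generic.elf
(gdb) target extended-remote localhost:3333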

At this point, “nothing” will happen.  If you look at the console where OpenOCD is running, you’ll see that a connection was received, but that’s it.
Press the “Pause” button, and you will stop execution wherever the device happened to be, and netbeans will jump to the line of code. In this example, it’s blocked trying to turn on HSE, as there’s no HSE fitted on this board:

Source debugging in netbeans via OpenOCD

If you now set a breakpoint on “main” or anywhere early and press the “Restart” icon at the top, OpenOCD will restart your process from the top and stop at the first breakpoint. Yay!  If restart fails, make sure you used “extended-remote” for the target!

Bonus

If you click the “Debugger console” window, you can actually flash your code here too.  Leave the debugger running!  (No need to stop the debugger to rebuild) Make a change to your code, rebuild it, and then, on the “Debugger console” just enter “load” and press enter.  You’ll see the OpenOCD output as it reflashes your device.  As soon as that’s done, just hit the “Restart” icon in netbeans and start debugging your new code!

Installing Eagle 7.3 on Fedora 21 x64

So, as before, you need to symlink libssl and libcrypto, but this time, you need to do the symlinking in the normal 64bit lib path.

/usr/lib64$ sudo ln -s libssl.so.1.0.1k libssl.so.1.0.0
/usr/lib64$ sudo ln -s libcrypto.so.1.0.1k libcrypto.so.1.0.0

This is despite the eagle installer saying that it needs the 32bit libraries installed!

Using SWO/SWV streaming data with STLink under linux – Part 2

In Part 1, we set up some (very) basic code that writes out data via Stimulus Channel 0 of the ITM, to be streamed out over SWO/SWV, but we used the existing ST-provided Windows tool, “STLink”, to view the stream. Now let’s do it in linux.

OpenOCD has some draft support for collecting this data, but it’s very rough around the edges. [1]

I wrote a tool based on my own decoding of USB traffic to be a little more flexible. You connect to the STLink hardware, and can start/stop logging, change trace files, and change which stimulus ports are enabled. It is quite rough, but functional. It should not be underestimated how important being able to start/stop tracing is. In the ARM debug docs, turning on or reconfiguring trace is undefined as far as having the output bitstream be properly synced. (Section D4.4 of “Flush of trace data at the end of operation” in the Coresight Architecture spec, and most importantly, “C1.10.4 Asynchronous Clock Prescaler Register, TPIU_ACPR” in the ARMv7M architecture reference manual)

Don’t get me wrong, although my tool works substantially better than OpenOCD does, it’s still very rough around the edges. Just for starters, you can’t debug or flash at the same time as tracing! Having it integrated well into OpenOCD (or pyOCD?) is definitely the desired end goal here.

Oh yeah, and if your cpu clock isn’t 24MHz, like the example code from Part 1, then you must edit DEFAULT_CPU_HZ at the top of hack.py!
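Why it matters: the SWO baud rate is derived from the trace clock via the TPIU prescaler (the TPIU_ACPR register mentioned above), and in this setup the trace clock is the CPU clock. A rough sketch of the arithmetic, which matches the register write you’ll see in the log further down:

DEFAULT_CPU_HZ = 24000000  # what hack.py assumes; edit to match your part
swo_hz = 2000000           # the hz=2000000 shown when trace starts below
prescaler = DEFAULT_CPU_HZ // swo_hz - 1
print(hex(prescaler))      # 0xb, the value written to TPIU_ACPR (0xe0040010) below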

So, how do you use it?

First, get the source from github: https://github.com/karlp/swopy. You need pyusb 1.x. Then run it, and type “connect”:

karlp@tera:~/src/swopy (master)$ python hack.py 
:( lame py required :(
(Cmd) connect
STLINK v2 JTAG v14 API v2 SWIM v0, VID 0x483 PID 0x3748
DEBUG:root:Get mode returned: 1
DEBUG:root:CUrrent saved mode is 1
DEBUG:root:Ignoring mode we don't know how to leave/or need to leave
(1682, 2053)
('Voltage: ', 2.9293697978596906)
DEBUG:root:enter debug state returned: array('B', [128, 0])
('status returned', array('B', [128, 0]))
('status is: ', 'RUNNING')
(Cmd) 

Yes, there’s lots of debug. This is not for small children. You have been warned, but there is some help!

(Cmd) help

Documented commands (type help <topic>):
========================================
connect     raw_read_mem32   run       swo_read_raw
magic_sync  raw_write_mem32  swo_file  swo_start   

Undocumented commands:
======================
EOF          exit  leave_state  raw_read_debug_reg   swo_stop
enter_debug  help  mode         raw_write_debug_reg  version 

(Cmd) 

The commands of interest are swo_file, swo_start and swo_stop. So, enter a file name, and start it up…

(Cmd) swo_file blog.bin
(Cmd) swo_start 0xff
INFO:root:Enabling trace for stimbits 0xff (0b11111111)
DEBUG:root:READ DEBUG: 0xe000edf0 ==> 16842752 (0x1010000) status=0x80, unknown=0x0
DEBUG:root:WRITE DEBUG 0xe000edfc ==> 16777216 (0x1000000) (res=array('B', [128, 0]))
DEBUG:root:READMEM32 0xe0042004/4 returned: ['0x0']
DEBUG:root:WRITEMEM32 0xe0042004/4 ==> ['0x27']
DEBUG:root:WRITEMEM32 0xe0040004/4 ==> ['0x1']
DEBUG:root:WRITEMEM32 0xe0040010/4 ==> ['0xb']
DEBUG:root:STOP TRACE
DEBUG:root:START TRACE (buffer= 4096, hz= 2000000)
DEBUG:root:WRITEMEM32 0xe00400f0/4 ==> ['0x2']
DEBUG:root:WRITEMEM32 0xe0040304/4 ==> ['0x0']
DEBUG:root:WRITEMEM32 0xe0000fb0/4 ==> ['0xc5acce55']
DEBUG:root:WRITEMEM32 0xe0000e80/4 ==> ['0x10005']
DEBUG:root:WRITEMEM32 0xe0000e00/4 ==> ['0xff']
DEBUG:root:WRITEMEM32 0xe0000e40/4 ==> ['0xff']
DEBUG:root:READMEM32 0xe0001000/4 returned: ['0x40000000']
DEBUG:root:WRITEMEM32 0xe0001000/4 ==> ['0x40000400']
DEBUG:root:READMEM32 0xe000edf0/4 returned: ['0x1010000']
DCB_DHCSR == 0x1010000
(Cmd) rDEBUG:root:reading 16 bytes of trace buffer
DEBUG:root:Wrote 16 trace bytes to file: blog.bin
unDEBUG:root:reading 16 bytes of trace buffer
DEBUG:root:Wrote 16 trace bytes to file: blog.bin
DEBUG:root:reading 16 bytes of trace buffer
DEBUG:root:Wrote 16 trace bytes to file: blog.bin
DEBUG:root:reading 16 bytes of trace buffer
DEBUG:root:Wrote 16 trace bytes to file: blog.bin
DEBUG:root:reading 16 bytes of trace buffer
DEBUG:root:Wrote 16 trace bytes to file: blog.bin
DEBUG:root:reading 16 bytes of trace buffer
DEBUG:root:Wrote 16 trace bytes to file: blog.bin
DEBUG:root:reading 16 bytes of trace buffer
DEBUG:root:Wrote 16 trace bytes to file: blog.bin
DEBUG:root:reading 16 bytes of trace buffer
DEBUG:root:Wrote 16 trace bytes to file: blog.bin
DEBUG:root:reading 16 bytes of trace buffer
DEBUG:root:Wrote 16 trace bytes to file: blog.bin
DEBUG:root:reading 18 bytes of trace buffer
DEBUG:root:Wrote 18 trace bytes to file: blog.bin

Ok, great, but… where’d it go? Well, it’s in the native binary ARM CoreSight trace format, like so…

karlp@tera:~/src/swopy (master *)$ tail -f blog.bin | hexdump -C 
00000000  01 54 01 49 01 43 01 4b  01 20 01 37 01 31 01 38  |.T.I.C.K. .7.1.8|
00000010  01 0d 01 0a 01 54 01 49  01 43 01 4b 01 20 01 37  |.....T.I.C.K. .7|
00000020  01 31 01 39 01 0d 01 0a  01 54 01 49 01 43 01 4b  |.1.9.....T.I.C.K|
00000030  01 20 01 37 01 32 01 30  01 0d 01 0a 01 54 01 49  |. .7.2.0.....T.I|
00000040  01 43 01 4b 01 20 01 37  01 32 01 31 01 0d 01 0a  |.C.K. .7.2.1....|
00000050  01 54 01 49 01 43 01 4b  01 20 01 37 01 32 01 32  |.T.I.C.K. .7.2.2|
00000060  01 0d 01 0a 01 54 01 49  01 43 01 4b 01 20 01 37  |.....T.I.C.K. .7|
00000070  01 32 01 33 01 0d 01 0a  01 54 01 49 01 43 01 4b  |.2.3.....T.I.C.K|
00000080  01 20 01 37 01 32 01 34  01 0d 01 0a 01 54 01 49  |. .7.2.4.....T.I|
00000090  01 43 01 4b 01 20 01 37  01 32 01 35 01 0d 01 0a  |.C.K. .7.2.5....|
000000a0  01 54 01 49 01 43 01 4b  01 20 01 37 01 32 01 36  |.T.I.C.K. .7.2.6|
000000b0  01 0d 01 0a 01 54 01 49  01 43 01 4b 01 20 01 37  |.....T.I.C.K. .7|
000000c0  01 32 01 37 01 0d 01 0a  01 54 01 49 01 43 01 4b  |.2.7.....T.I.C.K|
000000d0  01 20 01 37 01 32 01 38  01 0d 01 0a 01 50 01 75  |. .7.2.8.....P.u|
000000e0  01 73 01 68 01 65 01 64  01 20 01 64 01 6f 01 77  |.s.h.e.d. .d.o.w|
000000f0  01 6e 01 21 01 0d 01 0a  01 54 01 49 01 43 01 4b  |.n.!.....T.I.C.K|
00000100  01 20 01 37 01 32 01 39  01 0d 01 0a 01 54 01 49  |. .7.2.9.....T.I|
00000110  01 43 01 4b 01 20 01 37  01 33 01 30 01 0d 01 0a  |.C.K. .7.3.0....|
00000120  01 68 01 65 01 6c 01 64  01 3a 01 20 01 32 01 34  |.h.e.l.d.:. .2.4|
00000130  01 35 01 32 01 20 01 6d  01 73 01 0d 01 0a 01 54  |.5.2. .m.s.....T|
00000140  01 49 01 43 01 4b 01 20  01 37 01 33 01 31 01 0d  |.I.C.K. .7.3.1..|
00000150  01 0a 01 54 01 49 01 43  01 4b 01 20 01 37 01 33  |...T.I.C.K. .7.3|

Which is ugly, but you get the idea.
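For the simple case in that dump, it’s easy enough to pick apart by hand: 0x01 is an ITM header byte meaning “one byte written to stimulus port 0”, and the next byte is the payload. A minimal, deliberately dumb Python 3 sketch that handles only that case, ignoring sync packets, overflow and everything else (the real tool described next is much more careful):

import sys

with open(sys.argv[1], "rb") as f:
    data = f.read()

chars = []
i = 0
while i + 1 < len(data):
    if data[i] == 0x01:        # instrumentation packet: port 0, 1 byte payload
        chars.append(chr(data[i + 1]))
        i += 2
    else:                      # anything else, just skip a byte and try again
        i += 1
sys.stdout.write("".join(chars))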

This is where swodecoder.py comes in. The author of the original SWO support in OpenOCD has some code to do this too; it’s more forgiving in its decoding, but more likely to make mistakes. Mine is somewhat strict on the decoding, but probably still has some bugs.

Usage is pretty simple:

$ python swodecoder.py blog.bin -f
Jumping to the near the end
Not in sync: invalid byte for sync frame: 1
Not in sync: invalid byte for sync frame: 73
Not in sync: invalid byte for sync frame: 1
Not in sync: invalid byte for sync frame: 67
Not in sync: invalid byte for sync frame: 1
Not in sync: invalid byte for sync frame: 75
Not in sync: invalid byte for sync frame: 1
Not in sync: invalid byte for sync frame: 32
Not in sync: invalid byte for sync frame: 1
Not in sync: invalid byte for sync frame: 50
Not in sync: invalid byte for sync frame: 1
Not in sync: invalid byte for sync frame: 49
Not in sync: invalid byte for sync frame: 1
Not in sync: invalid byte for sync frame: 53
Not in sync: invalid byte for sync frame: 1
Not in sync: invalid byte for sync frame: 51
Not in sync: invalid byte for sync frame: 1
Not in sync: invalid byte for sync frame: 13
Not in sync: invalid byte for sync frame: 1
Not in sync: invalid byte for sync frame: 10
TICK 2154
TICK 2155
TICK 2156
TICK 2157
---snip---
TICK 2194
TICK 2195
TICK 2196
Pushed down!
held: 301 ms
Pushed down!
TICK 2197
held: 340 ms
TICK 2198
TICK 2199
Pushed down!
TICK 2200
TICK 2201
---etc---

You (hopefully) get the idea. When the writes to the stimulus ports are 8bit, swodecoder.py simply prints them to the screen. So here we have a linux implementation of the SWV viewer from the Windows STLink tool. It’s got a lot of debug, and a few steps, but the pieces are all here for you to go further.

In part three, we’ll go a bit further with this, and demonstrate how SWO lets you interleave multiple streams of data, and demux it on the host side. That’s where it starts getting fun. (Hint, look at the other arguments of swodecoder.py and make 16/32bit writes to the stimulus registers)

To stop SWO capture, type “swo_stop” and press enter, or just ctrl-d, to stop trace and exit the tool.

[1] Most importantly, you cannot stop/start the collection; you can only set a single file at config time, which isn’t very helpful for running a long daemon. Perhaps even worse, OpenOCD is hardcoded to only enable stimulus port 0, which is a bit restrictive when you can have 32 of them, and being able to turn them on and off is one of the nice things.

Using SWO/SWV streaming data with STLink under linux – Part 1

This is part 1 in a short series about using the SWO/SWV features of ARM Cortex-M3 devices [1]
This post will not explain what SWO/SWV is, (but trust me, it’s cool, and you might work it out by the end of this post anyway) but will focus on how to use it.

First, so you have a little idea of where we’re going, let’s start at the end…

// Assumes the general makefile/newlib setup from footnote [2]; errno and
// STDOUT_FILENO/STDERR_FILENO need these two headers, while ITM_STIM8 and
// ITM_STIM_FIFOREADY come from your CMSIS/libopencm3 headers.
#include <errno.h>
#include <unistd.h>

enum { STIMULUS_PRINTF }; // We'll have more one day
 
static void trace_send_blocking8(int stimulus_port, char c) {
        while (!(ITM_STIM8(stimulus_port) & ITM_STIM_FIFOREADY))
                ;
        ITM_STIM8(stimulus_port) = c;
}
 
int _write(int file, char *ptr, int len)
{
        int i;
        if (file == STDOUT_FILENO || file == STDERR_FILENO) {
                for (i = 0; i < len; i++) {
                        if (ptr[i] == '\n') {
                                trace_send_blocking8(STIMULUS_PRINTF, '\r');
                        }
                        trace_send_blocking8(STIMULUS_PRINTF, ptr[i]);
                }
                return i;
        }
        errno = EIO;
        return -1;
}

You can get this code from either:

  • My github repository
  • The swo-1-printf directory in swo-stlink-linux-1

That’s all[2] you need to have printf redirected to an ITM stimulus port. It’s virtually free, doing nothing if you don’t have a debugger connected. [3]

Groovy. If you have the Windows STLink Utility, you can use this right now. Enter the correct clock speed of your main app, choose stimulus 0 (or all), and watch your lovely console output.

stlink-windows-swv

Ok, that’s cool, but weren’t we going to do this in linux? We were, and we will, but let’s stop here with a good working base, so we can focus on just the extra stuff later.

[1] Cortex M4 too, but not M0, that’s another day altogether. Specifically, STM32L1 parts, but the concepts and code are the same.
[2] Expects you have your general makefiles all set up to do “the right thing” for newlib stubs and so on.
[3] Except for generating the formatted string of course, that’s not free. And it does take a _little_ bit of time to write the characters out without overflowing, but that’s a story for another day.

Decoding Vendor Specific USB protocols with Wireshark lua plugins

Earlier this week I was doing some reverse engineering and confirmation of behaviour for a USB tool. I got wireshark to sniff the traffic (not going into that here; it’s relatively straightforward and documented enough on the web to get by), but as it’s a vendor specific protocol, it was just lots of bytes.

I was decoding them by hand, and then copying and pasting into a python script. (I had pretty good sources for what all the bytes meant, I just had no good way of visualising the stream, which is where wireshark and this post come in.)

I have written a custom wireshark dissector before, but I wasn’t super happy with the mechanism. I have been doing some work with Lua in my dayjob, and had read that wireshark dissector plugins could now be written in Lua. Seemed like a better/easier/more flexible approach.

I got to work, following these (somewhat) helpful resources:

These were all generally very helpful, but there were two things I wrestled with that I didn’t feel were at all well described. From here on out, I’m going to assume that you know what’s going on, and just need help with things that are not covered in any of the earlier links.

Little Endian

You declare ProtoFields as just uint8, uint16, uint32. This is fine and sane. But there are a few ways of working with them when you add them to the tree.

f.f_myfield = ProtoField.uint32("myproto.myfield", "My Field", base.HEX)
-- snip --
mytree:add_le(f.f_myfield, buffer(offset, 4))

This way works very well, selecting “myfield” in the packet view correctly highlights the relevant bytes. But say you want to get the value, to add to the info column for instance, you might do this… (if you read the api guide well enough)

local val = buffer(offset, 4):le_uint()
-- normal add, we've done the LE conversion
-- if you don't do :le_uint() above, and do a :add_le() below,
-- the info column will show the backwards endian value!
mytree:add(f.f_myfield, val)
pinfo.cols["info"]:append(string.format(" magicfield=%d", val))

At first glance, this works too. The Info Column shows the right value, the tree view shows the right value. BUT clicking on the tree view doesn’t highlight the bytes.

Here’s how to do it properly:

local val = buffer(offset, 4)
mytree:add_le(f.f_myfield, val)
pinfo.cols["info"]:append(string.format(" magicfield=%d", val:le_uint()))

Ok, a little fiddly, but you would probably get there in the end.

Reading existing USB properties

The docs talk about doing something like:

local f_something = Field.new("tcp.port")

Except, I didn’t find anywhere that described what magic strings were available. I tried using the values available in the display filter box, like, “bEndpointAddress” but never got anywhere. One of the samples led me to this tidbit:

local fields = { all_field_infos() }
for ix, finfo in ipairs(fields) do
    print(string.format("ix=%d, finfo.name = %s, finfo.value=%s", ix, finfo.name, getstring(finfo)))
end

When you click on a packet, this will dump lots to the console, and you can hopefully work out the magic values you need!

Synchronising packets

TCP streams are easy, you have sequence ids to correlate things. USB isn’t quite the same. You can see the “URB” has two frames, from host to device and device to host, which are in sync, but for the very common case of writing to an OUT endpoint, and getting a response on an IN endpoint, you don’t get any magical help.

I found a way of doing this, but it’s not ideal, and tends to mess up the display in wireshark if you click on packets in reverse order. This is because I just set a state variable in the dissector when I parsed the OUT packet, and check it when I parse the IN packet. It works, but it was less than ideal. Sometimes you need to click forwards through packets again. Sometimes the tree view would show the right values too, but the info column would be busted. Probably doing something wrong somewhere, but hard to know what.

Actually hooking it up

Finally, and most frustrating of all, was how to actually hook it up! The demos, all being TCP related, just do:

the_table = DissectorTable.get("udp.port")
the_table:add(9999, my_protocol)

Ok, well and good, but how on earth do I register a USB dissector? You need to register it by the class, which is sort of ok, but good luck having dissectors for multiple vendor specific classes. I didn’t see a way of adding a dissector based on the VID:PID, though I think that would be very useful.

usb_table = DissectorTable.get("usb.bulk")
-- this is the vendor specific class, which is how the usb.bulk table is arranged.
usb_table:add(0xff, my_proto)
-- this is the unknown class, which is what shows up with some usb tools?!
usb_table:add(0xffff, my_proto)

Update 2013-Jan-16: One of the sigrok developers got the vid/pid matching working, in another example of a lua plugin.

This skims over a lot, but it should help.
Final working code:

I’ve totally glossed over exactly _what_ or _why_ I was decoding vendor specific usb classes. That’s a topic for another day :)

Here are some pictures though.

How it looks before you write a plugin.

With the plugin, note that the info column isn’t always showing the proper values. No idea why. wireshark’s weird

PS: How about that title! linkbait to the max! keywords to the rescue! (or at least, the ones I had tried searching for)

Installing Eagle 6.5 on Fedora 18

I’ve spoken about this before, but here’s how to install current Eagle 6.5.0 on Fedora 18 64bit.

As before, this may be incomplete for a fresh install, ie, I’ve already installed all the 32bit compat libs. But Eagle expects to find OpenSSL 1.0.0. No other version at all.

You need openssl-libs.i686 to be installed, but that will be 1.0.1e, and none of the existing symlinks are suitable.

$ sh ./eagle-lin-6.5.0.run
/tmp/eagle-setup.11607/eagle-6.5.0/bin/eagle: error while loading shared libraries: libssl.so.1.0.0: cannot open shared object file: No such file or directory

However, you can just symlink it a bit more…

/usr/lib $ sudo ln -s libssl.so.1.0.1e libssl.so.1.0.0
/usr/lib $ sudo ln -s libcrypto.so.1.0.1e libcrypto.so.1.0.0

Installing Eagle 5.12 on Fedora 19 64bit

Earlier I wrote about installing Eagle 5.12 on Fedora 17 64 bit, but on a new computer, I have an updated OS, so here’s a revised list.

This should be the minimal steps to install. (You probably need glibc.i686 as well, but it _should_ get pulled in by these other libs)

  • libXrender.i686
  • libXrandr.i686
  • libXcursor.i686
  • freetype.i686
  • fontconfig.i686
  • libXi.i686
  • libpng12.i686 (Changed names since Fedora 17)
  • libjpeg-turbo.i686

libstdc++.i686 might have been necessary as well, but I had quite possibly already installed it.
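If you want them all in one go, something like this should do it (yum, since this was Fedora 19; tack libstdc++.i686 on the end if you need it):

$ sudo yum install libXrender.i686 libXrandr.i686 libXcursor.i686 freetype.i686 fontconfig.i686 libXi.i686 libpng12.i686 libjpeg-turbo.i686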

Bash tab completion for python scripts

Bash completion is normally pretty awesome. You can even tab complete hostnames to ssh, if you’ve ssh’d to them before. It can complete arguments to all sorts of commands, and there’s even a python module that will let you provide completion for your custom command.

And this is all handled by plugins, so if you are executing something new/different, you get plain old regular completion of files and directories. This mostly all works, but I ran into a problem with a python script I’d written the other day, and I have no solution at all, only a workaround.

In all cases I expected to get full path plus file completion and prompting, as my command is “unknown”.

Command → Outcome:

  • asdfasdf fileprefix[tab] → Works perfectly
  • python my-script.py fileprefix[tab] → Works perfectly
  • python my-script.py --my-argument fileprefix[tab] → Only completes directories!
  • chmod +x my-script.py; ./my-script.py --my-argument fileprefix[tab] → Works perfectly

I can only assume there’s a bug in the python completion module? No idea how to diagnose it further though. Having to make my own scripts executable isn’t the end of the world, so it works for now. This was with bash 4.2.45(1) on Fedora 18, and bash-completion version 2.1, fwiw.