Monday, October 5, 2020

LGT8F328P EDMINI board


Earlier this year I purchased an EDMINI board from Electrodragon.  It uses an LGT8F328P chip, which supports the AVR instruction set.  The instruction set timings and peripheral registers vary slightly from the ATmega328P, so it is not 99% compatible as claimed by Electrodragon.  I bought one to see just how compatible it is, and possibly to port some of my AVR libraries to the LGT MCU.

The module arrived in an anti-static bag, inside a padded envelope.  After connecting 5V power to the board, the D13 LED blinked on and off every second, suggesting that it comes with the Arduino blink sketch pre-loaded.  I then hooked up a USB-TTL adapter, installed the LGT board file in the Arduino IDE, and tried flashing a modified blink sketch to the board.  The upload failed, and after some debugging I found that reset was not working on the MCU.  Neither pressing and holding the reset button nor grounding RST would reset the board.  After contacting Electrodragon, Chao agreed to replace the board, and sent two new boards.  He told me that they see a higher-than-average failure rate with the LGT8F328P chips.

In addition to Chao's frank comment about reliability, another concern I had about the LGT parts was the lack of markings on the chip.  I suspect LGT sells the parts without markings so vendors can label them with their own brand.  This also makes it easier for more nefarious manufacturers to relabel them as an ATmega328P.

When the new boards arrived, the first thing I did was make sure the reset button worked.  After pressing reset, the LED flashes quickly three times for the bootloader, and then flashes on and off every second.  However, when I tried uploading a sketch using the Arduino IDE, the upload still failed.  After some more debugging, I found I could upload if I pressed the reset button just before uploading.  This meant the bootloader was working, but auto-reset (toggling the DTR line) was not.  These boards use the same auto-reset circuit as an Arduino Pro Mini:

A negative pulse on DTR will cause a voltage drop on RST, which is supposed to reset the target.  When the target power is 5V and 3V3 TTL signals are used, toggling DTR will cause RST to drop from 5V to about 1.7V (5 - 3.3).  With the ATmega328P and most other AVR MCUs, 2V is low enough to reset the chip.  The LGT8F328P, however, requires a lower voltage to reset.  In some situations this can be a good thing, as it means the LGT MCU is less likely to reset due to electromagnetic interference.

The EDMINI board has a 3V3 regulator which can be selected by a solder jumper.  This is mentioned on the Electrodragon site, but it is not clearly documented which pads need to be shorted to switch from 5V to 3V3.  After a bit of experimenting I found the right pads, ran the board at 3V3, and auto-reset then worked.

I do most of my AVR development using command-line tools, not the Arduino IDE.  I compiled a small program that toggles every pin on PORTB using avr-gcc 5.4.0, and flashed it to the EDMINI board using avrdude.  Nothing happened.  Since the Arduino blink sketch worked, I knew that the LED on PB5 was working.  My conclusion is that the LGT Arduino core must do some setup to enable PORTB.  This is common on modern MCUs such as ARM Cortex-M parts, but on AVRs like the ATmega328P, writing 255 to the PORTB and DDRB registers is all it takes to drive every pin on port B high.
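A minimal version of such a test program looks like this (a sketch of the approach, not my exact source; adjust F_CPU for your clock):

    #define F_CPU 16000000UL        /* assumed clock; adjust to match the board */
    #include <avr/io.h>
    #include <util/delay.h>

    int main(void)
    {
        DDRB = 0xFF;                /* all port B pins as outputs */
        while (1) {
            PORTB ^= 0xFF;          /* toggle the whole port */
            _delay_ms(500);
        }
    }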

I won't be doing any development work with the LGT MCUs.  Although they are cheaper and can run a bit faster than authentic AVR parts, their compatibility is rather limited.  Any code that relies on the standard AVR instruction set timing, such as my picoUART library, will not work.  The 8F328P cannot be programmed with a USBasp, as its native programming interface is SWD, not Atmel's SPI-based protocol.  For a cheap and powerful MCU, the CH551 looks much more interesting.

Thursday, September 17, 2020

Recording the Reset Pin

 


The AVR reset pin has many functions.  In addition to serving as an external reset signal, it is used for debugWIRE, for SPI programming, and for high-voltage programming.  Except for its use as an external reset signal, the datasheet specifications are somewhat ambiguous.  I recently started working on an updated firmware for the USBasp, and wanted to find out more details about the SPI programming mode.  The image above is one of many recordings I made from programming tests of AVR MCUs.

When I first started capturing the programming signals, I observed seemingly random patterns on the MISO line before programming was enabled.  Although the datasheet lists the target MISO line as being an output, it only switches to output mode after the first two bytes of the "Programming Enable" instruction, 0xAC 0x53, are received and recognized.  Prior to that the pin floats, and the seemingly random patterns I observed were caused by the signals on the MOSI and SCK lines inducing a voltage on the MISO line.  I enabled the pullup resistor on the programmer side in order to keep the MISO line high until the PE instruction was recognized by the target.
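Enabling the pullup is just a matter of setting the port bit while the pin is an input.  On an ATmega8-based USBasp, where the programmer's MISO pin is PB4, it looks something like this (a sketch; pin assignments vary by programmer board):

    #include <avr/io.h>

    static void miso_pullup_enable(void)
    {
        DDRB  &= ~_BV(PB4);   /* PB4 is MISO on an ATmega8-based USBasp */
        PORTB |=  _BV(PB4);   /* input + port bit set = internal pullup */
    }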

One of the steps in the datasheet's serial programming algorithm that doesn't make sense to me is step 2, which says, "Wait for at least 20 ms and enable Serial Programming by sending the Programming Enable serial instruction to pin MOSI."  It's clear from the capture image above that a wait time of less than 100 us worked in this case.  I did a number of experiments with different targets (t13, t85, m8a), with and without the CKDIV8 fuse set, and found a delay of 64 us was always sufficient.  Nevertheless, I still used a 20 ms delay in the USBasp firmware.

Another observation I made was of a repeatable delay between the 8th rising edge of the SCK signal on the second byte and MISO going low.  After multiple tests, I found that delay is between 2 and 3 target clock cycles.  A close-up of the 0x53 byte shows this clearly:


The 2-3 clock cycle delay seems to correspond with the datasheet's specification of a minimum low and high period for the SCK signal of 2 clock cycles when the target is running at less than 12MHz.  However, I found I couldn't consistently get a target running at 8MHz to enter programming mode with a SCK clock of 1.5MHz.  Additional logs of the programming sequence revealed something interesting when multiple PE instructions are sent at less than 1/8th of the target clock rate, with a positive pulse on RST for synchronization.  In those sequences, the delay between the 8th rising edge of SCK on the second byte and MISO going low was smaller for the second and subsequent times the PE instruction was sent.  It seems you need to use a slower SCK frequency to get the target into programming mode, but after that, the frequency can be increased to 1/4 of the target clock.

Using what I learned, I have implemented automatic SCK speed negotiation and a higher default SCK clock speed.  The speed negotiation starts with 1.5MHz for SCK, and makes 3 attempts to enter programming mode.  If that fails, the next slower speed (750kHz) is tried three times, and so on until a speed is found where the target responds.  For subsequent communications with the target, the speed is doubled, since the slowest speed is only needed the first time the PE command is received after power-up.  The firmware also supports a maximum SCK frequency of 3MHz, vs 1.5MHz for the original firmware.
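In outline, the negotiation looks something like this (a sketch; the helper names are illustrative, not the actual firmware symbols):

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative helpers -- the real firmware uses its own routines. */
    bool spi_try_programming_enable(uint32_t sck_hz);  /* send PE, check echo */
    void spi_set_clock(uint32_t sck_hz);

    #define SCK_MIN_HZ 32000UL    /* slowest speed worth trying */

    uint32_t negotiate_sck(void)
    {
        for (uint32_t sck = 1500000UL; sck >= SCK_MIN_HZ; sck /= 2) {
            for (uint8_t attempt = 0; attempt < 3; attempt++) {
                if (spi_try_programming_enable(sck)) {
                    /* the slow clock is only needed to enter programming
                       mode, so run the rest of the session at double speed */
                    spi_set_clock(sck * 2);
                    return sck * 2;
                }
            }
        }
        return 0;   /* no working speed found */
    }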

The higher speeds don't make a large difference in flash/verify times, since the overhead of the V-USB code tends to dominate beyond a SCK frequency of 750kHz or so.  Reading the 8kB of flash on an ATtiny85 takes around 3 seconds.  By optimizing the low-speed USB code, as Tim did with u-wire, it should be possible to double that speed.

Sunday, September 6, 2020

Flashing AVRs at high speed

 

I've written a few bootloaders for AVR MCUs, which necessarily need to modify the flash while running.  The typical 4ms to write or erase a page depends on the speed of the internal RC oscillator.  Here's a quote from section 6.6.1 of the ATtiny88 datasheet:

Note that this oscillator is used to time EEPROM and Flash write accesses, and the write times will be affected accordingly. If the EEPROM or Flash are written, do not calibrate to more than 8.8 MHz. Otherwise, the EEPROM or Flash write may fail.

I wondered how running the RC oscillator well above 8.8MHz would impact erasing and writing flash.  In the past I had read about tests showing the endurance of AVR flash and EEPROM is many times more than the spec, but I couldn't find any tests done while running the AVR at high speed.  I did come across a post from an old grouch on AVRfreaks warning not to do it, so now I had to try.

The result is a program I called flashabuse, which you'll see later is a bit of a misnomer.  What the program does is set OSCCAL to 255, then repeatedly erase, verify, write, and verify a page of flash.  I chose to test just one page of flash for a couple reasons.  First, testing all 128 pages of flash on an ATtiny88 would take much more time.  The second is that I would only risk damaging one page, and an ATtiny88 with 127 good pages of flash is still useful.
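The core of the program is a loop like the following (a sketch of the approach, not the exact source; it assumes avr-libc's boot.h macros, which also work on ATtiny parts with self-programming, and an arbitrary test page address):

    #include <avr/io.h>
    #include <avr/boot.h>
    #include <avr/pgmspace.h>

    #define TEST_PAGE 0x1E00   /* arbitrary page near the end of flash */

    static uint8_t page_is(uint16_t addr, uint8_t expect)
    {
        for (uint8_t i = 0; i < SPM_PAGESIZE; i++)
            if (pgm_read_byte(addr + i) != expect)
                return 0;
        return 1;
    }

    int main(void)
    {
        OSCCAL = 255;                        /* run the RC oscillator flat out */
        while (1) {
            boot_page_erase(TEST_PAGE);      /* erase, then verify all 0xFF */
            boot_spm_busy_wait();
            if (!page_is(TEST_PAGE, 0xFF)) break;
            for (uint8_t i = 0; i < SPM_PAGESIZE; i += 2)
                boot_page_fill(TEST_PAGE + i, 0x5555);
            boot_page_write(TEST_PAGE);      /* write, then verify all 0x55 */
            boot_spm_busy_wait();
            if (!page_is(TEST_PAGE, 0x55)) break;
        }
        while (1);                           /* hang on a verify failure */
    }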

The results were very positive.  My little program was completing about 192 cycles per second, taking 2.6ms for each page erase or page write.  I let it run for an hour and a half, so it successfully completed 1 million cycles.  Not bad considering Atmel's design specification is a minimum of 10,000 cycles.

So why does the flash work fine at high speed?  I think it has to do with how floating-gate flash memory works.  Erasing and writing the flash requires removing and adding a charge to the floating gate using high voltages.  Atmel likely uses timing margins well in excess of the 10% indicated in the datasheet, so even half the typical 4ms is more than enough to ensure error-free operation.  I even think writing at high speed puts less wear on the flash because it exposes the gate to high voltages for a shorter period of time.

Addendum

I received some feedback questioning whether the faster write time may reduce retention due to reduced charge on the floating gate.  As I mentioned above, Atmel likely used a very large timing margin when designing the flash memory.  Chris Lamont, who tested flash retention on a PIC32, stated that retention failure is "extremely unlikely".

The retention specs for the ATtiny88 are, "20 years at 85°C / 100 years at 25°C".  As this Micron technical note (PDF) shows, retention specs are based on models, not actual testing.  Micron's JESD47I PCHTDR testing is done at 125°C for 1000 hours, and requires 0 failures.  TEKMOS states, "As a very rough rule of thumb, the data retention time halves for every 10C rise in temperature."  Extrapolating from a 100-year retention at 25°C, the 230°C rise to 255°C, a typical reflow soldering peak temperature, means 23 halvings, cutting retention to 100 years / 2^23, or only about 6 minutes.

In an attempt to show that retention is not impacted by repeated fast flashing, I performed two additional tests.  For the first test, I baked the subject MCU for 12 hours at 150°C, then performed 100,000 fast write/erase cycles.  Next, 0x55 was written to the test page, and repeatedly verified for 2 hours.  This test passed with no errors.  For the second test, I filled the 8kB of flash with zeros to put a charge on the floating gate of every bit.  I then baked the MCU for 12 hours at 150°C, and verified that all bits remained at zero.  This test passed with all 65,536 bits reading zero.  I did, however, have a failure of one solder joint, likely due to the stress of thermal cycling.

For those who are particularly paranoid about flash retention, one solution is refreshing the flash.  For an AVR MCU, it would be simple to refresh the flash on every bootup with a small segment of code in .init1.  The code would copy each page into the page buffer, then perform a write on the page.  This would refresh all the 0 bits, and extend the retention life for another 20 to 100 years.
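A sketch of the idea follows (my illustration, placed in .init3 rather than .init1 so the stack and zero register are already set up for compiled C code):

    #include <avr/io.h>
    #include <avr/boot.h>
    #include <avr/pgmspace.h>

    /* Rewrite each page with its own contents on every boot.  A write
       without an erase can only drive bits toward 0, so this refreshes
       the charge on the programmed (0) bits and leaves 1 bits alone. */
    __attribute__((naked, used, section(".init3")))
    static void refresh_flash(void)
    {
        for (uint16_t page = 0; page < FLASHEND; page += SPM_PAGESIZE) {
            for (uint8_t i = 0; i < SPM_PAGESIZE; i += 2)
                boot_page_fill(page + i, pgm_read_word(page + i));
            boot_page_write(page);
            boot_spm_busy_wait();
        }
    }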

Thursday, August 27, 2020

Hacker's Intro to USB hardware

 


Low-speed 1.5Mbps and full-speed 12Mbps USB, while more complicated than a UART, are still hacker-friendly.  As the standard approaches 25 years old, I've decided to document some of the more useful highlights I've learned.

While some USB devices will have accessible PCB pads where you can probe signals, it's best to have some breakouts and pass-thru cables with test points.  I've found broken micro-USB cables to be a cheap option.  I cut the micro-B end off, strip the wires, and solder them to some protoboard with a 4-pin header for the ground, 5V, D+, and D- connections.  A crude USB voltage tester can be made with a couple of silicon diodes and a white or blue LED in series, powered by the 5V line.  In the 20mA range, a 1N4148 has a Vf of about 0.8V, so two diode drops plus a 3.4V LED add up to 5V, and the LED will be brightly lit if 5V is present.  I've also made a custom USB-A extension cable with a section of the D+ and D- wires exposed for easy attachment of alligator clips.

Although USB power is 5V, typically at up to 500mA, the signalling is 3.3V.  At the host, the data pins are pulled to ground with a resistance between 15k and 22k.  At the device, the D+ (full-speed) or D- (low-speed) pin is pulled up to 3.3V to signal to the host that a device is attached.  The spec shows this being done with a 1.5k pullup to 3.6V, which with a mid-range 18.5k host pulldown forms an 18.5k/20k divider, resulting in 3.6V * 0.925 or 3.33V.  I've found a 10k pullup to 5V works just fine, and many devices use a 1.5k pullup to 3.3V, since the spec requires only a minimum of 2.7V for detection to work.  For a connected low-speed device (like a mouse), D+ will be near 0V, and D- will be near 3.3V.  For a full-speed device, the polarity will be reversed.  High-speed devices use low-swing 400mV signalling, with both D+ and D- at 0V when idle.

The frequency counter on a multimeter can be used to tell if a device is alive, or if the host has failed to recognize it.  For a device that has been enumerated by a host, the host will send a keepalive signal to the device.  For a low-speed device, this is a single-ended 0 (SE0) where D- is pulled low for 1.3us every ms.  Therefore, a frequency of at least 1kHz will be detected on the D- line.

You can get a USB device to reconnect without unplugging it by forcing a bus reset.  This can be done by briefly shorting the D+ (full-speed) or D- (low-speed) line to ground.  To avoid releasing the magic smoke by accidentally shorting the wrong connection, I suggest using a 100-150 ohm resistor, which is still more than sufficient to pull the line low and reset the bus.

Thursday, July 2, 2020

Getting started with the WCH CH551 and CH552

When I first read about the CH554 series of MCUs, I thought it would be interesting to test out some day.  Part of the attraction is that it's based on the 8051, which is a well-documented and widely used architecture.  The first assembly language I learned almost 40 years ago was for the 6502, so learning to program the 8-bit CISC should be relatively easy.

Instead of purchasing the bare chips for pennies at LCSC and putting together a breakout board, I bought a couple modules from Electrodragon.  I had learned that the CH551, CH552, and CH554 all used the same die.  I bought the CH551 and CH552 modules with the intention of eventually trying to hack them into working as a CH554.

For testing the modules, in addition to the CH554 SDK for SDCC on Linux, I've used Ch55xduino on Windows.  One thing not in the Ch55xduino documentation is driver setup.  The windoze version I'm using is 7E, and when I first inserted the CH551 module, I got a driver error.

Using Zadig to set the driver to libusb-win32 solved the problem.

The CH55xduino documentation also lacks pinout documentation for anything other than the reference board.  To help, I've copied the pinouts from the CH552 datasheet.


The CH55x bootloader supports DFU, which is what the CH55xduino uploader uses the first time code is uploaded to the module.  Once the first sketch is uploaded, the CH55xduino core includes a CDC serial stack.  With my CH551 module no longer appearing as a DFU device, I had to use Zadig again to change the CDC Serial device to use the USB Serial (CDC) driver.  After that, the module appears as a COM port.

With the COM port selected in the Arduino IDE, subsequent uploads enter the bootloader by switching the baud rate to 1200bps.  If no COM port is selected, the upload tool looks for a CH55x device in DFU bootloader mode.  To enter the bootloader, it is necessary to pull the USB D+ pin up to 3.3V when power is applied.  The Electrodragon boards have a pinout for an upload jumper, which when shorted will connect the D+ pin (P3.6/UDP) to 3.3V through a 10k resistor.  On one of my modules I soldered pin headers and use a jumper to force it into upload mode.  On the other, I just used a low-value (270 ohm) through-hole resistor pushed into the holes.

Currently CH55xduino is not optimized for size, with a basic blink sketch requiring 5333 bytes of flash.  Officially, the CH551 is only supposed to have 10kB of available flash, so the CH55xduino overhead means less than 5kB is left for user code.  The CH551 actually seems to have 12kB available for flashing user code, which I think will be plenty if the CH55xduino core gets some optimization work.  Since I like to do low-level embedded coding, I'll be using SDCC from the command line most of the time.  The blink example in the CH554 SDK for SDCC compiles to 700 bytes, and I was able to get that down to 232 bytes after leaving out the UART initialization in debug.c. With a bit more optimization I think I can get the blink example down to 100 bytes or so.

One small surprise I found during my testing is that the Electrodragon CH551 and CH552 modules use different pins for the user LED.  On the CH551 it is P3.0, working in open-drain mode, so the LED lights up when P3.0 is low.  On the CH552, drive P1.4 high to light the LED.  This is documented on the Electrodragon web site, but it is easy to forget when switching between the two modules.
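For example, a minimal SDCC blink for the CH551 module looks something like this (a sketch, assuming the ch554.h SFR definitions from the CH554 SDK; use P1.4 instead for the CH552 board):

    #include <ch554.h>              /* SFR definitions from the CH554 SDK */

    static void delay(unsigned int n)
    {
        while (n--) {
            unsigned char i = 255;  /* crude busy-wait */
            while (i--)
                ;
        }
    }

    void main(void)
    {
        while (1) {
            P3 ^= 0x01;             /* toggle P3.0; LED is lit when low */
            delay(500);
        }
    }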

I've already started to learn how to configure the standard MCS-51 UART, and have figured out how to directly manipulate the ports using the SFRs (Special Function Registers).   Once I've mastered how to program these cheap little devices, I'll follow up with another blog post revealing the details.

Thursday, June 11, 2020

A full-duplex tiny AVR software UART

I've written a few software UARTs for AVR MCUs.  All of them have bit-banged the output, using cycle-counted assembler busy loops to time the output of each bit.  The code requires interrupts to be disabled to ensure accurate timing between bits.  This makes it impossible to receive data at the same time as it is being transmitted, and therefore the bit-banged implementations have been half-duplex.  By using the waveform generator of the timer/counter in many AVR MCUs, I've found a way to implement a full-duplex UART, which can simultaneously send and receive at up to 115kbps when the MCU is clocked at 8MHz.

I expect most AVR developers are familiar with using PWM, where the output pin is toggled at a given duty cycle, independent of code execution.  The technique behind my full-duplex UART is to use the waveform generation mode so the timer/counter hardware drives the OC0A pin at the appropriate time for each bit to be transmitted.  The TIM0_COMPA interrupt runs after each bit is output.  The ISR determines if the next bit is a 0 or a 1.  For a 1 bit, TCCR0A is configured to set OC0A on compare match.  For a 0 bit, TCCR0A is configured to clear OC0A on compare match.  The ISR also updates OCR0A with the appropriate timer count for the next bit.  To allow for simultaneous receiving, the TIM0_COMPA transmit ISR is made interruptible (the first instruction is "sei").
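In C, the heart of that transmit ISR looks something like this (an illustrative sketch, not the actual source; BIT_TICKS and the shift-register handling are simplified):

    #include <avr/io.h>
    #include <avr/interrupt.h>

    #define BIT_TICKS 139           /* 8MHz / 57600bps, timer at full speed */

    volatile uint16_t tx_shift;     /* data bits followed by the stop bit */

    ISR(TIM0_COMPA_vect, ISR_NOBLOCK)   /* interruptible, so RX can break in */
    {
        OCR0A += BIT_TICKS;             /* schedule the next bit edge */
        if (tx_shift & 1)               /* next bit 1: set OC0A on match */
            TCCR0A |= _BV(COM0A1) | _BV(COM0A0);
        else                            /* next bit 0: clear OC0A on match */
            TCCR0A = (TCCR0A | _BV(COM0A1)) & ~_BV(COM0A0);
        tx_shift >>= 1;
    }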

The receiving is handled by PCINT0, which triggers on the received start bit, and TIM0_COMPB interrupt which runs for each received bit.  I wrote this ISR in assembler in order to ensure the received bit is read at the correct time, taking into consideration interrupt latency.  If any other interrupts are enabled, they must be interruptible (ISR_NOBLOCK if written in C).  I've implemented a two-level receive FIFO, which can be queried with the rx_data_ready() function.  A byte can be read from the FIFO with rx_read().

The code is written to work with the ATtiny13, ATtiny85, and ATtiny84.  Only PCINT0 is supported, which on the t84 means that the receive pin must be on PORTA.  With a few modifications to the code, PCINT1 could be used for receiving on PORTB with the t84.  The total time required for both the transmit and the receive ISRs is 52 cycles.  Adding an average interrupt overhead of 7 cycles for each ISR means that there must be at least 66 cycles between bits.  At 8MHz this means the maximum baud rate is 8,000,000/66 = 121kbps.  The lowest standard baud rate that can be used with an 8MHz clock is 9600bps.

The wgmuart application implements an example echo program running at the default baud rate of 57.6kbps.  In addition to echoing back each character received, it prints out a period '.' every second along with toggling an LED.
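Using the receive API described above, the main loop of such an echo program could look like this (a sketch; uart_init() and tx_write() are assumed names, while rx_data_ready() and rx_read() are from the actual code):

    #include <stdint.h>

    extern void uart_init(void);        /* assumed init call */
    extern void tx_write(char c);       /* assumed transmit call */
    extern uint8_t rx_data_ready(void);
    extern char rx_read(void);

    int main(void)
    {
        uart_init();
        while (1) {
            if (rx_data_ready())
                tx_write(rx_read());    /* echo each byte as it arrives */
        }
    }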


I've published the code on github.

Monday, April 27, 2020

Measuring AVR interrupt latency

One thing I like about AVR MCUs is that their datasheets are relatively short and simple.  It's also one of the things I don't like, because the datasheets often lack important details.  External interrupt latency is one area where complete and clear details are lacking.  I decided to investigate the interrupt latency of the ATtiny13 and the ATtiny85.  The datasheet's description of interrupt response time and external interrupts is identical for both parts.

Interrupt Response Time

The ATtiny13 datasheet section 4.7.1, under the heading "Interrupt Response Time", says, "The interrupt execution response for all the enabled AVR interrupts is four clock cycles minimum. After four clock cycles the Program Vector address for the actual interrupt handling routine is executed. [...] The vector is normally a jump to the interrupt routine, and this jump takes three clock cycles. [...] If an interrupt occurs when the MCU is in sleep mode, the interrupt execution response time is increased by four clock cycles."

While section 4.7.1 is reasonably detailed, it has one significant error and one important omission.  The error is the sentence, "The vector is normally a jump to the interrupt routine, and this jump takes three clock cycles".  All AVRs with less than 8KB of flash, like the ATtiny13, have no jump instruction.  They only have a relative jump, "rjmp", which takes two clock cycles.  This is obviously a copy/paste error from the datasheet of an AVR with more than 8KB of flash.  Anyone familiar with the AVR instruction set would likely catch this simple error.  The omission from section 4.7.1 is much harder to recognize until you carefully examine section 9.2 and figure 9-1 in the datasheet.

Figure 9-1 shows a circuit which appears to add a latency of two clock cycles to pin change interrupts.  There is no written description for the circuit, and the external interrupt details in section 9.2 of the datasheet state, "Pin change interrupts on PCINT[5:0] are detected asynchronously."  Since pin change interrupts can be used to wake the part from power-down sleep mode when all clocks are disabled, they must be detected asynchronously during power-down sleep.  To determine when they are detected synchronously requires testing.

To test the interrupt latency I wrote a program in assembler that can generate low pulses of different lengths using PWM.  I chose not to write the program in C because I wanted to be able to measure the interrupt latency down to a single cycle.  On the t13, PB1 is the pin for INT0, PCINT1, and OC0B.  By using OC0B to generate a low pulse on PB1, I can trigger INT0 and PCINT1 without any external connections.  When the interrupt is triggered, it should take four cycles to execute the code at the interrupt vector.  That code is an rjmp to the interrupt function, and the rjmp takes two additional clock cycles.  For the best-case latency, the first instruction in the interrupt function will execute six cycles after the interrupt is triggered.

The first instruction of the interrupt function checks the state of the pin that triggered the interrupt (the "sbic" instruction).  If the pin is low, it skips the next instruction, then goes into an infinite loop.  If the pin is high, it toggles the LED pin.  Since the PWM is configured to generate a low pulse, if the pulse has ended before the sbic, the LED will light up to indicate the interrupt response time was too slow.  The length of the pulse is one cycle longer than the value stored in OCR0B, which is done at lines 28 and 29.  My testing consisted mainly of modifying the OCR0B value, then building and flashing the modified code to the AVR.

Results

As expected, INT0 latency is 4 clock cycles from the end of the currently executing instruction.  This means that if the interrupt occurs during the first cycle of a call instruction which takes 3 cycles, the interrupt response time will be 6 cycles.  For pin change interrupts, the latency is 6 cycles, indicating the synchronizer circuit adds 2 cycles of latency.  In idle sleep mode, both INT0 and PCINT latency is 8 cycles, indicating pin change interrupts operate asynchronously when the CPU clock is not running.