Friday, December 26, 2014
Cheap OBDII bluetooth readers
A couple months ago I bought an OBDII bluetooth reader off AliExpress for $5. The product photos showed a mini reader (white), but what I received was a larger black one. The size wasn't a problem, but the indicator LEDs face down when plugged into a Hyundai Elantra. The glow of the power light is still visible, so it is still possible to tell it is powered up.
Along with the module, I received a CD with PC software. Instead of using a laptop, I decided to use Torque Lite on my android phone. It was a quick install, and after I paired with the OBD interface (code 1234), Torque Lite automatically detected it. If the banner ads in the app bother you, just turn off wifi to stop them from appearing. Torque Lite lets you modify the display, positioning dials and digital gauges as desired.
Torque lets you view and clear fault codes, and as seen in the screenshot above, will show things like acceleration if your phone has an accelerometer. You can also display speed from your phone's GPS and compare it to the speed reported over OBDII. The voltage reading might be off slightly; I read 14.42V with my multi-meter vs the 14.5V reported over OBDII.
If you do any amount of work with cars, $5 for an OBDII reader is well worth the money.
Friday, December 5, 2014
Reconditioning NiCd batteries
NiCd batteries are still commonly used in power tools. When compared to lithium-ion batteries, cost isn't the only reason to prefer NiCd. A common problem with nickel-based batteries is a drastically reduced number of cycles when they are not properly maintained.
Since I first got my cordless drill (a 14.4V Makita) many years ago, I would keep one battery pack in the drill and one in the charger. This way I'd always have a fully-charged battery ready to go when I needed it. Although I'd only use my drill a couple times a month, the packs would lose most of their capacity after a few years and need replacing. Instead of the 500-1000 cycles they're supposed to get, I was getting less than 100 cycles out of them.
My first searches for ways to recondition batteries led me to blogs and youtube videos of people zapping batteries with high voltage, even using DC welders. While this might temporarily remove dendrites, it will not restore the batteries to anywhere near their original capacity.
Earlier this year, I found an article on Battery University - How to Restore Nickel-based Batteries. It explains in great detail how Cadex battery analyzers work, and how they significantly improve the number of cycles obtained from battery packs. However Cadex battery analyzers cost hundreds to thousands of dollars, too much to spend compared to less than a hundred dollars every few years for a new set of batteries.
The first thing I decided to do was stop keeping a battery on the charger at all times. NiCd batteries are best stored with a minimal charge. This would help get more cycles out of new batteries, but it wouldn't recondition my packs that were more than a year old and giving me less than half their original capacity.
To exercise the battery packs, I needed to discharge them at 1C until they reached 1V/cell. For my 14.4V 1.3Ah packs, this means a 1.3A discharge down to 12V. Even when my packs were discharged, they still had an open-circuit voltage of around 15.5V - a sign of high internal resistance. A 15 to 20 Ohm, 20W power resistor would be ideal, but I didn't have one on hand. Rather than order one and wait for it to arrive, I searched through my junk and found an old car speaker with a DC resistance of 13 Ohms; close enough to what I needed. With a couple of alligator clips, I had a virtually free battery exerciser.
To recondition, I needed to do a "slow discharge" to 0.4V/cell, though the article doesn't clearly define "slow discharge". I figure around 0.05C should count as a slow discharge, which for my packs works out to a 1W 220 Ohm resistor. I have a red LED with a 1.2K resistor that I use for breadboard projects, so I decided to use that instead. I'm not sure if an even slower discharge is tangibly better, but it should be at least as good as 0.05C. And with the LED dimming as the battery pack discharges, I could avoid frequently checking the voltage with my multi-meter.
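The resistor sizing above is just Ohm's law on the pack voltage. A quick sketch of the arithmetic, using my pack's numbers (adjust `pack_voltage` and `capacity_ah` for your own batteries):

```python
def discharge_resistor(pack_voltage, capacity_ah, c_rate):
    """Resistance and power rating for a constant-resistance discharge.

    Approximates the current as pack_voltage / R, i.e. the current at
    the start of the discharge (it tapers off as the voltage drops).
    """
    current = capacity_ah * c_rate       # amps for the desired C rate
    resistance = pack_voltage / current  # ohms
    power = pack_voltage * current       # watts dissipated, worst case
    return resistance, power

# 1C "exercise" discharge for a 14.4V 1.3Ah pack
r, p = discharge_resistor(14.4, 1.3, 1.0)
print(round(r, 1), round(p, 1))   # ~11.1 ohms, ~18.7W: a 13 Ohm speaker is close

# ~0.05C "recondition" discharge
r, p = discharge_resistor(14.4, 1.3, 0.05)
print(round(r), round(p, 1))      # ~222 ohms, ~0.9W: hence a 1W 220 Ohm resistor
```

The 13 Ohm speaker draws a bit over 1C at full charge, and the 1.2K LED string is closer to 0.01C than 0.05C, but as noted above neither deviation seems to matter much in practice.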
The first exercise cycle on each pack took a little more than an hour, and the recondition cycle took 10-12 hours. After one exercise/recondition cycle I didn't notice much of an improvement. Three full exercise and recondition cycles seems to have nearly doubled the battery capacity. Today when I took a dead battery from the drill I tested the open-circuit voltage and it was only 11V. This means the exercise and recondition cycles have significantly reduced the internal resistance, and since the voltage is already below 1V/cell, I don't have to do the exercise cycle before re-charging it.
Saturday, November 15, 2014
SE8R01 2.4GHz wireless modules
I recently purchased what were advertised as nRF24l01+ modules. The modules don't have any silk screen marking the pinout, so the first problem was figuring it out. After a bit of searching I found the pinout for nRF SMD modules, which is the same as the modules with the 8-pin connector except that Vcc and Gnd are swapped:
- Vcc (1.8-3.6V)
- Gnd
- CE
- CSN
- SCK
- MOSI
- MISO
- IRQ
My initial testing of these modules indicated they did not use Nordic nRF24l01+ chips, or compatible chips such as the Beken BK2423 or the SiLabs Si24R1. The most obvious indication that these are not nRF modules is that the default pipe0 address (register 0A) is 46 20 88 41 70 instead of E7 E7 E7 E7 E7. They also respond to the bank switch command (0x50 followed by 0x53), which is not supported by the Nordic chip. I sent a message to the vendor asking for a data sheet, and while I waited I tried to figure out what chip the modules use.
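For reference, the pipe0 probe is just a standard nRF-style R_REGISTER read of address 0x0A. A sketch of the SPI byte framing (pure byte math, no hardware; substitute whatever SPI transfer routine you actually use):

```python
R_REGISTER = 0x00   # nRF read command: 000A AAAA, A = 5-bit register address
RX_ADDR_P0 = 0x0A   # pipe0 address register

def read_register_frame(addr, length):
    """Bytes to clock out over SPI to read `length` bytes of a register.

    The first byte is the command; the rest are dummy bytes clocked out
    while the chip shifts the register contents back on MISO.
    """
    return bytes([R_REGISTER | (addr & 0x1F)]) + bytes(length)

frame = read_register_frame(RX_ADDR_P0, 5)
print(frame.hex())   # 0a0000000000

# A genuine nRF24l01+ answers e7 e7 e7 e7 e7 on MISO;
# these modules answered 46 20 88 41 70.
```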
Besides Beken and SiLabs, Hoperf makes a compatible module called the RFM70. The RFM70 modules have a pinout that starts with Gnd, and they have the same default pipe0 address as the nRF. The only other chip I could find that is similar to the nRF24l01 was the NST LT8900 or LT8901. The datasheets for the NST chips didn't match with the register values from my modules.
Surprisingly, less than 24 hours after I messaged the seller, I received an attachment containing a datasheet for the Semitek SE8R01. My initial hope that these would inter-operate with nRF modules simply by changing the pipe0 address was dashed by the discovery that these modules use a slightly different packet format. Section 7.3 of the datasheet shows a 2-byte guard after the address field, just like Bluetooth EDR uses where it switches from GFSK to DPSK.
If these modules use DPSK after the guard bytes, then there is no way they will communicate with genuine nRF modules. Another less significant source of incompatibility is that these modules don't have a 250kbps mode; instead they have a 500kbps mode along with the 1 and 2mbps modes. The 250kbps mode on the nRF24l01+ modules is good for extended range, since it has 9 dB better sensitivity than 1mbps (-94 vs -85 dBm). The SE8R01 datasheet indicates -86 dBm sensitivity at both 1mbps and 500kbps, so there would seem to be little reason to use the 500kbps mode.
One benefit to these modules is that they have a received power report in register 09, something the nRF modules don't have. Although the datasheet documents the bank switch command, and how the currently active bank is reported in the status register 0E, there is no documentation of the bank 1 registers.
I also tested the modules to see if they will support packet sizes over 32 bytes. The chip seems to accept a payload length of up to 255 bytes, but the read Rx payload command only gives 32 bytes before looping back and repeating the first byte in the payload.
Power
The SE8R01 supports up to 4 dBm of output power, which should allow for better range than the nRF chip, which tops out at 0 dBm. To set the output power to 4 dBm, section 6.5 of the datasheet says to set PA_PWR[3:0] in the RF_SETUP register to 1111. Power consumption for the Semitek chip at 0 dBm output is 18.5mA - much worse than the 11.3mA the nRF chip consumes at 0 dBm. The higher power consumption will make it much harder to power these modules from a CR2032 coin cell, since many cheap coin cells start dropping voltage when the current draw passes 10mA.
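The bit manipulation for that is straightforward. A sketch, assuming (per my reading of section 6.5) that PA_PWR occupies the low four bits of RF_SETUP - verify against your copy of the datasheet:

```python
PA_PWR_MASK = 0x0F   # PA_PWR[3:0] in RF_SETUP, per SE8R01 datasheet 6.5

def with_pa_pwr(rf_setup, pa_bits):
    """Return an RF_SETUP value with PA_PWR[3:0] replaced by pa_bits,
    leaving the data-rate and other bits untouched."""
    return (rf_setup & ~PA_PWR_MASK) | (pa_bits & PA_PWR_MASK)

print(hex(with_pa_pwr(0x00, 0b1111)))   # 0xf -> +4 dBm output
```

As usual with these chips, read RF_SETUP first, modify only the field you care about, and write it back.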
Conclusion
At a price of $6 for 10, the SE8R01 modules are 25% cheaper than nRF24l01+ modules selling for $8 for 10. With nothing more than a couple of minor tweaks, they can be controlled using existing nRF24l01 code libraries. However, they do not inter-operate with nRF modules, consume significantly more power, and likely have no better range than the nRF 250kbps mode provides. So while the lower price may make them attractive to a volume manufacturer, as a hacker I prefer to stick with the genuine nRF modules.
Sunday, October 12, 2014
USB DC boost converters
I recently purchased some USB boost converters and some AA battery holders to make 5V portable power sources. The boost converters were $6.45 for 10 from AliExpress store XM Electronic trade, and the battery holders were 16c each at Tayda.
A few of the boost converter modules had a piece cracked off the 4.7uH inductor, but otherwise they were in good order. I trimmed the leads from the battery holder and soldered them to the boost converter input. I slightly bent the USB connector tabs so they fit into the holes on the back of the battery holder, and hot glued the board to the battery holder.
The modules were advertised as "input voltage: 1-5V" and "Output Current: Rated 1A-1.5A (single lithium input)". I put in 2 NiMh cells, and the red LED on the boost module lit up. The input from the batteries was 2.6V and the output voltage with no load was 5.08V. The no-load current draw was between 2 and 3 mA. If the modules output 1-1.5A with a 3.7V lithium battery for input, I calculated they should output at least 600mA with 2.4V in from a couple of NiMh AA cells. Other vendors advertise specs of the same modules as, "output current of 500 ~ 600MA with two AA batteries", so my 600mA calculation seems about right.
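My 600mA estimate comes from assuming the converter is limited on the input side, so at a fixed output voltage and roughly constant efficiency the deliverable output current scales with input voltage. A back-of-envelope sketch of that reasoning (a rough model, not a datasheet spec):

```python
def scaled_output_current(rated_out_a, rated_vin, actual_vin):
    """Rough output-current estimate for an input-limited boost converter:
    Iout scales linearly with Vin at fixed Vout and efficiency."""
    return rated_out_a * actual_vin / rated_vin

# Rated 1A out from a 3.7V lithium cell; what about 2.4V from 2x NiMh?
print(round(scaled_output_current(1.0, 3.7, 2.4), 2))   # ~0.65 A
```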
I started load testing with a 68 Ohm resistor, and the output was 5.12V, for an output current of 75mA. With a 34 Ohm load the output voltage was 4.89V. The USB voltage is supposed to be 5V +/- 0.25V, so getting 600mA output without the voltage dropping below 4.75V was looking unlikely. For the next load test I used an old 15W car speaker with a DC resistance of 13 Ohms. With the speaker connected the voltage was only 4.48V, giving an output current of 345mA. At this load, the voltage from the batteries was down to 2.4V. Interpolating between the results indicates the modules would output not much more than 200mA before dropping below 4.75V.
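The load-test numbers are just Ohm's law on the measured output voltage, plus a linear interpolation between the two points that straddle 4.75V:

```python
def load_current(v_out, r_load):
    """Current through a resistive load at the measured output voltage."""
    return v_out / r_load   # amps

i_34 = load_current(4.89, 34)   # ~0.144 A at 4.89V
i_13 = load_current(4.48, 13)   # ~0.345 A at 4.48V

def interp_current_at(v_target, v1, i1, v2, i2):
    """Linear interpolation of output current vs voltage between two
    measured load points - a crude model of the converter's droop."""
    return i1 + (v1 - v_target) * (i2 - i1) / (v1 - v2)

print(round(interp_current_at(4.75, 4.89, i_34, 4.48, i_13), 2))   # ~0.21 A
```

So by this estimate the module falls out of USB spec at a little over 200mA, a third of the advertised figure.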
The last thing I tried was charging a phone. When I plugged in my wife's iPhone, nothing happened. In order to be identified as a USB charging port, the D+ and D- pins need to be shorted, or for a high-power (over 500mA) charging port they need to have a voltage divider from the 5V power. I checked the pins with a meter, and they were not connected. I then soldered a small jumper to short D+ and D-, and tried plugging in my wife's iPhone again. This time the screen indicated the phone was charging.
Since the modules did not perform as advertised, I messaged the AliExpress seller XM Electronic trade/Allen Lau with the details of my testing, and requested a partial refund. After four days he did not respond so I opened a dispute with AliExpress. Within 12 hours, he rejected the dispute only stating, "There was no evidence of right". In the past I've encountered sellers on AliExpress that have even given full refunds after learning their products don't perform as advertised. With Allen Lau it's the first time I've encountered this kind of "I don't give a shit" attitude.
Although the modules do not perform as advertised, they are good for a 5V power source up to about 250mA. You can also find boost converters with a beefier 47uH inductor, but from testing results I've read online they're not much better; with 2.4V in, the output voltage of the bigger modules drops below 4.75V at around 300mA. For backup power for a mobile phone, one of the mobile power banks using a lithium 18650 cell may be a better idea.
If you're just looking to boost the voltage from a battery for a MCU project, check out Sprites mods.
Saturday, October 4, 2014
nRF24l01+ reloaded
Since writing my post on nrf24l01 control with 3 ATtiny85 pins, I've realized these modules are quite popular. Of the more than a dozen posts on my blog, it gets the most hits. From the comments on that post, I can see that despite a lot of online resources about these modules, there are still some poorly documented problems that people run into. So I'll go into some more detail on how they work, share some of my tips for testing & debugging, and more.
The biggest source of confusion seems to be with holding CE high. Section 6.1.6 of the datasheet specifies a 10us high pulse to empty one level of the TX fifo (normal mode), or holding CE high to empty all levels of the TX fifo. This is borne out in testing - CE can be held high for a transmitting device.
The above is a picture of a $1.50 wireless transmitter node I made. Sorry about the poor focus - I don't have a good macro mode on my camera. It's made with an ATtiny88-au glued and soldered with 30AWG wire to a nRF24l01+ module. The pin arrangement is as follows:
ATtiny88 ------ nRF module
14 (PB2/SS) --- 4 (CSN)
15 (PB3/MO) --- 6 (MOSI)
16 (PB4/MI) --- 7 (MISO)
17 (PB5/SCK) -- 5 (SCK)
29 (PC6/RST) -- 3 (CE)
Connecting reset to CE keeps CE high when the AVR is running, and it also gives me an easy way to program the ATtiny88. By connecting the CE pin to the RST pin of my programmer (a USBasp), the existing pins on the nRF module can be used as a programming header. As long as CSN is not grounded, the nRF will not interfere with the communication on the SPI bus. In my testing it worked with CSN floating, but it would probably be best to tie it high or connect a pullup resistor between CSN and Vcc. A small 0603 15K chip resistor should work nicely for this.
The module is powered by a CR2032 cell in a holder on the back of the module. When I wrote my post about Cheap battery-powered micro-controller projects, I hadn't done much experimenting with coin cells. Although some CR2032 batteries can output 20mA continuous, the no-name cells I bought from DX certainly cannot. I connected a 20uF electrolytic capacitor across the supply to hold the voltage up during transmits (around 12mA), and put the module in power-down mode the rest of the time.
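The capacitor sizing can be sanity-checked with dV = I*t/C. A sketch assuming the cap supplies essentially all of the ~12mA for the duration of a short transmit burst (pessimistic, since the weak cell still contributes something):

```python
def droop_v(current_a, duration_s, cap_f):
    """Voltage droop across a capacitor supplying a constant current,
    ignoring any contribution from the (weak) coin cell - worst case."""
    return current_a * duration_s / cap_f

# 12mA burst for ~500us from a 20uF cap
print(round(droop_v(0.012, 500e-6, 20e-6), 2))   # ~0.3 V of sag
```

A few tenths of a volt of sag keeps the nRF above its 1.9V minimum with plenty of margin, which matches what I saw in practice.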
While keeping CE high is fine for a device that is only transmitting or only receiving, you might run into a problem if you try to switch between the two. The reason can be found in the state diagram from the datasheet: (contrast enhanced to make the state transitions easier to see)
The state diagram shows there is no way to go directly from Rx to Tx mode or the other way around. The module must go into the standby-I state by setting CE low, or into the power down state by setting PWR_UP=0. With CE held high, the only way to change between Rx and Tx mode is to go through power down mode. Another option that doesn't require a separate pin for CE control is to connect CE to CSN. This would allow you to use some nRF libraries that rely on CE toggling. A potential drawback to this is if you want to keep the module in receive mode while polling the status (#7) register for incoming packets. Each time CSN is brought low to poll the status, it will bring CE low as well, which will cause the module to drop out of Rx mode. It only takes 130us to return to Rx mode once CE goes high, so this may not be a concern for you.
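In terms of register writes, the Rx/Tx switch itself is just the PRIM_RX bit in CONFIG (toggled while in standby or power down, per the state diagram). A sketch of the CONFIG byte math using the standard nRF24l01+ bit layout:

```python
PRIM_RX = 1 << 0   # CONFIG bit 0: 1 = receiver, 0 = transmitter
PWR_UP  = 1 << 1   # CONFIG bit 1: 1 = powered up

def config_byte(base, rx):
    """CONFIG value for Rx or Tx mode with the chip powered up,
    preserving the other bits in `base` (CRC settings, IRQ masks)."""
    cfg = (base | PWR_UP) & ~PRIM_RX
    return cfg | (PRIM_RX if rx else 0)

base = 0x08   # the chip's reset default: EN_CRC set
print(hex(config_byte(base, rx=True)))    # 0xb - powered-up receiver
print(hex(config_byte(base, rx=False)))   # 0xa - powered-up transmitter
```

With CE held high you would clear PWR_UP, write the new PRIM_RX value, then set PWR_UP again to make the transition through power down described above.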
Debugging
Using a red LED in series with Vcc, as described in my post on nrf24l01 control with 3 ATtiny85 pins, makes it possible to quickly see how much power the module is using. When in power down mode with CSN high, no light is visible from the LED due to the very low power use. When powered up in Rx or Tx mode, the LED glows brightly, with 10-15mA of current. By watching the power use I was able to tell that after the Rx fifo is full, the module stops receiving, causing the power consumption to drop. The diagram in section 7.5.2 of the datasheet indicates that will happen if CE is low, but it still happens even with CE high. Once a single packet is read out from the Rx fifo, it starts listening for packets again. I also found connecting a LED to the IRQ pin (#8) helpful to see when the Rx or Tx IRQ fired.
Lastly, when testing connectivity, start with enhanced shockburst disabled (EN_AA = 0) and CRC off. Then once you've confirmed connectivity, enable the features you want. If you decide to use CRC, go with the 2-byte CRC (EN_CRC and CRCO in the CONFIG register), since a 1-byte CRC will let about 1 in every 256 corrupted packets through undetected, vs 1 in 65,536 for a 2-byte CRC.
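Those CRC numbers come straight from the checksum width: an n-bit CRC lets roughly 1 in 2^n randomly corrupted packets through undetected.

```python
def undetected_fraction(crc_bits):
    """Fraction of randomly corrupted packets an n-bit CRC fails to
    catch - the corrupted data matches the CRC by chance 1 in 2^n."""
    return 1 / (2 ** crc_bits)

print(undetected_fraction(8))    # 1 in 256 for a 1-byte CRC
print(undetected_fraction(16))   # 1 in 65,536 for a 2-byte CRC
```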
Undocumented registers
Register 6 (RF_SETUP) seems to be a 9-bit register. Writing 2 bytes of all 1s (ff ff) to the register, followed by a read, results in the following response:
0e ff 01
For other registers documented as being a single byte, attempts to read multiple bytes results in the same byte being repeated after the status byte.
There's also an undocumented multi-byte register at address 1e. I found some code online that refers to this register as AGC_CONFIG, which implies it is for automatic gain control. I could not find any documentation on how to use this register. With my modules, the first three bytes of this register defaulted to 6d 66 05. Reading them back after writing all 1s resulted in fd ff 07, so some bits are fixed at 0.
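The write-all-1s-then-read-back trick is a handy way to map which bits an undocumented register actually implements: bits stuck at 0 read back as 0. A small sketch of that analysis on the AGC_CONFIG read-back above:

```python
def implemented_bits(readback):
    """Binary view of a register read-back after writing all 1s -
    any 0 bit is one the chip does not implement (fixed at 0)."""
    return [f"{b:08b}" for b in readback]

# Register 1e defaulted to 6d 66 05; after writing all 1s it read fd ff 07:
print(implemented_bits(bytes([0xfd, 0xff, 0x07])))
# ['11111101', '11111111', '00000111'] - bit 1 of the first byte and
# the top five bits of the third byte appear fixed at 0
```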
Sunday, September 14, 2014
On-chip decoupling capacitors
In virtually all of my micro-controller projects, I'll use 0.1uF ceramic capacitors between Vcc and Gnd. Depending on the power draw of the MCU and inductance on the power lines, they may not be necessary, but at a cost of a penny or less each there's little reason not to use them.
I remembered seeing CPUs that have on-chip decoupling capacitors, and thought it would be nice if the MCUs I'm using had the same. When working with small projects on mini breadboards, not having to find space for the decoupling cap would be convenient. It would also save me the trouble of digging through my disorganized collection of components looking for that extra capacitor.
My first idea was to glue a 0805 (2mm x 1.25mm) MLCC to the top of the chip, and then solder 30AWG wire-wrap to the power and ground leads. I used contact cement, and although it seemed secure after drying for about 30 minutes, once I added flux and touched it with my soldering iron tip it moved freely. Then I tried a small drop of super glue, but for some reason it wasn't dry after an hour; maybe it was defective. If someone knows of a glue that would work well, let me know in the comments or send me an email.
Even after drying for a day, neither the contact cement nor the super glue would securely hold the capacitors while I tried to solder them. With the help of a pair of tweezers I was able to solder a MLCC to the top of an ATtiny85 as shown in the photo above.
For 28-pin DIP AVR MCUs that have ground and power on adjacent pins, the job is a lot easier. Here's a ATtiny88-PU with a 0805 MLCC:
The easiest method I came up with doesn't require any glue. I trimmed, then soldered the leads of a ceramic disc capacitor to the power and ground pins of an ATtiny84a:
2014/10/20 Update:
I found information about a high-temperature component adhesive which indicates typical cyanoacrylate adhesive is stable only up to 82C. For something that is readily available in hardware stores, I may try silicone adhesive, or even BBQ paint.
Friday, September 12, 2014
Inside the "$10" Rockchip TV dongle
Last year Rockchip demoed a "$10" Miracast/DLNA TV dongle. The $10 was not a target retail price, but probably the BOM cost. They can be found for under $20 including shipping from China. I bought one and posted a review of the Miracast and DLNA functionality. In this post I'll document a basic teardown of the dongle, along with instructions on setting up a root console connection.
There are no screws holding the dongle together; the top and bottom of the case simply snap together. The RK2928 is underneath a small heat spreader. Next to it is a Spectek PE937-15E 256MB DDR3 chip. The rest of the components appear to be for power regulation.
On the bottom of the board is a Winbond serial flash chip and a Realtek rtl8188ETV USB wifi module. Small pads labeled TX, RX, and GND are visible near the edge of the board. They are obviously for a serial console. I soldered some 30AWG wire-wrap wire to the pads, and to some 0.1" header pins. With my soldering iron I melted an opening in the plastic case where I hot glued the header pins.
After putting the dongle back together, I connected the serial console to a USB-TTL serial adapter and powered up the module. I could see the Rx LED on the serial adapter flickering, indicating it was receiving console output data. I started a terminal program, but saw nothing when I tried 9600 and 19,200bps. Then I remembered my logic analyzer has serial baud rate detection. I hooked it up, ran a capture while the dongle was booting, and found the baud rate was 115,200bps.
After setting the terminal program to 115,200, I could see what I recognized as the console output of a Linux kernel. When the output stopped, I saw a "#" indicating a root shell prompt - no password required. However the prompt wouldn't last long before the dongle seemed to reboot and repeat the start-up console output.
I first checked the board to make sure nothing was shorting from the connections I made to the serial port pads. I then used a home-made USB cable with exposed power connections to measure the voltage. I found the voltage was briefly dipping below 4.5V when the dongle was rebooting. The dongle's peak power draw was too much for the PC USB port. My solution was a 220uF capacitor connected to my home-made USB cable. With the capacitor added, the voltage stayed above 4.8V, and there were no reboots.
The Linux installation seems to be a stripped-down Android image rather than a standard Linux distribution. While the 256MB of RAM is ample for an embedded Linux distribution, the 16MB of flash makes installing something like Picuntu Linux very difficult.
Sunday, August 24, 2014
A 5c lithium ion battery charger
My step-daughter lost the battery charger for her camera (for the second time in two years). It takes a few weeks for a new one to arrive from DealExtreme, and she was hoping to use the camera over the weekend, so I decided to hack something together.
As various sites explain, lithium-ion rechargeable batteries should be charged to 4.2 volts. USB ports provide 5V, so all I needed was a way to drop 5V down to 4.2 or less. Standard diodes have a voltage drop of 0.6 to 1.0 volts, so I pulled up the datasheet for a 1n4148, and looked at the I-V curve:
A standard USB port should provide up to 500mA of current, enough for charging a small camera battery. A fully-discharged li-ion battery sits at 3V, and will climb to 3.8V within minutes of the start of charging. Line 2 in the graph indicates a 1.2V drop at 350mA of current. Under load the voltage output of a USB port will drop a bit, so with 4.9V from the USB port and 3.8V at the battery, there is 1.1V across the diode and the charging current will be around 250mA (where 1.1V intersects line 2). Looking at the numbers, a single 1n4148 diode would work as a battery charge controller.
Connecting to the battery was the hardest part of the problem. I tried making some contacts out of 24AWG copper wire, but that didn't work. I thought of bending a couple 90-degree header pins to fit the battery contact spacing, but I couldn't find my prototyping board to solder them into. I ended up tack-soldering a couple 26AWG wires to plug into a breadboard.
For a charge status indicator, I used a couple LEDs I had on hand. A 3V green LED in series with a 1.7V red LED starts to glow visibly at 4V, and is moderately bright by 4.2V. The few mA of current bled through the LEDs above 4V ensures enough current through the diode to keep its forward voltage above 0.8V, thereby keeping the charge voltage from going over 4.2V.
The results were quite satisfactory. After a few hours of charging, the voltage plateaued at 4.21V. I removed the wires I tack soldered to the tabs, and the battery was ready to be used. The same technique could be used with higher capacity batteries by using a different diode - a 1N4004 for example has a voltage drop of 1.0V at around 2A.
Thursday, August 21, 2014
Writing a library for the internal temperature sensor on AVR MCUs
Most modern AVR MCUs have an on-chip temperature sensor; however, neither avr-libc nor Arduino provides a simple way to read it. I'm building wireless nodes which I want to be able to sense temperature. In addition to the ATtiny88s I'm currently using, I want to be able to use other AVRs like the ATmega328. With that in mind I decided to write a small library to read the on-chip temperature sensor.
I found a couple people who had already done some work with the on-chip temperature sensor. Connor tested the ATmega32U4, and Albert tested the ATmega328. As can be seen from their code, each AVR seems to have a slightly different way of setting up the ADC to read the temperature. Neither the MUX bits nor the reference selection is consistent across different parts. For example, on the ATtiny88 the internal voltage reference is selected by clearing the ADMUX REFS0 bit, while on the ATmega328 it is selected by setting both REFS0 and REFS1.
One way of writing code that compiles on different MCUs is to use #ifdef statements based on the type of MCU. For example, when compiling for the ATmega328, avr-gcc defines, "__AVR_ATmega328__", and when compiling for the ATmega168 it defines, "__AVR_ATmega168__". Both MCUs are in the same family (along with the ATmega48 & ATmega88), and therefore have the same ADC settings. Facing the prospect of a big list of #ifdef statements, I decided to look for a simpler way to code the ADC settings.
I looked through the avr-libc headers in the include/avr directory. Although there are no definitions for the MUX settings for various ADC inputs (e.g. ADC8 for temperature measurement on the ATtiny88), there are definitions for the individual reference and mux bits. After comparing the datasheets, I came up with the following code to define the ADC input for temperature measurement:
#if defined (REFS1) && !defined(REFS2) && !defined(MUX4)
// m48 family
#define ADCINPUT (1<<REFS0) | (1<<REFS1) | (1<<MUX3)
#elif !defined(REFS1) && !defined(MUX4)
// tinyx8
#define ADCINPUT (0<<REFS0) | (1<<MUX3)
#elif defined(REFS2)
// tinyx5 0x0f = MUX0-3
#define ADCINPUT (0<<REFS0) | (1<<REFS1) | (0x0f)
#elif defined(MUX5)
// tinyx4 0x0f = MUX0-3
#define ADCINPUT (0<<REFS0) | (1<<REFS1) | (1<<MUX5) | (1<<MUX1)
#else
#error unsupported MCU
#endif
From previous experiments I had done with the ATtiny85, I knew that the ADC temperature input is quite noisy, with the readings often varying by a few degrees from one to the next. The datasheets refer to ADC noise reduction sleep mode as one way to reduce noise, which would require enabling interrupts and making an empty ADC interrupt. I decided averaging over a number of samples would be an easier way.
I don't want my library to take up a lot of code space, so I needed to be careful with how I do math. Douglas Jones wrote a great analysis of doing efficient math on small CPUs. To take an average requires adding a number of samples and then dividing. To correct for the ADC gain error requires dividing by a floating-point number such as 1.06, something that would be very slow to do at runtime. Dividing a 16-bit number by 256 is very fast on an AVR - avr-gcc just takes the high 8 bits. I could do the floating-point divide at compile time by making the number of additions I do equal to 256 divided by the gain:
#define ADC_GAIN 1.06
#define SAMPLE_COUNT ((256/ADC_GAIN)+0.5)
The ADC value is a 10-bit value representing the approximate temperature in Kelvin. AVRs are only rated for -40C to +85C operation, so a signed 8-bit value representing the temperature in Celsius is more practical. Subtracting 273 from the ADC value before adding it is all that is needed to do the conversion.
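As a rough sketch of the approach (this is not the exact library code; `read_adc()` here is a stand-in that returns a constant raw reading so the arithmetic can be checked on a PC):

```c
#include <stdint.h>

#define ADC_GAIN     1.06
/* 256/gain samples, so dividing the sum by 256 applies the
 * gain correction at no runtime cost */
#define SAMPLE_COUNT ((uint16_t)((256 / ADC_GAIN) + 0.5))

/* Stand-in for the MCU-specific conversion; a real build would
 * trigger the ADC using the ADCINPUT setting shown earlier. */
static uint16_t read_adc(void)
{
    return 298;  /* raw reading of roughly 25C, in Kelvin */
}

static int8_t temperature(void)
{
    int16_t sum = 0;
    uint16_t i;
    for (i = 0; i < SAMPLE_COUNT; i++) {
        /* subtract 273 per-sample so the running sum stays small */
        sum += (int16_t)read_adc() - 273;
    }
    /* dividing by 256 compiles to taking the high byte on AVR */
    return (int8_t)(sum / 256);
}
```

With the constant 298 reading above, this returns 23 rather than 25; the per-chip offset stored in EEPROM (see the Calibration section below) absorbs that residual error.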
Calibration
I think one of the reasons people use external thermistors or I2C temperature sensing chips instead of the internal AVR temperature sensor is the lack of factory calibration. As explained in Application Note AVR122, the uncalibrated readings from an AVR can be off significantly. Without ADC noise reduction mode and running at 16MHz, I have observed results that were off by 50C.
My first thought was to write a calibration program which would be run when the AVR is a known temperature, and write the temperature offset value to EEPROM. Then when the end application code is flashed, the temperature library code would read the offset from EEPROM whenever the temperature is read. But a better way would be to automatically run the calibration when the application code is flashed. However, how could I do that?
In my post, Trimming the fat from avr-gcc code, I showed how main() isn't actually the first code to run after an AVR is reset. Not only does avr-gcc insert code that runs before main, it allows you to add your own code that runs before main. With that technique, I wrote a calibration function that will automatically get run before main:
// temperature at programming time
#define AIR_TEMPERATURE 25
__attribute__ ((naked))\
__attribute__ ((used))\
__attribute__ ((section (".init8")))\
void calibrate_temp (void)
{
    if (eeprom_read_byte(&temp_offset) == 0xff)
    {
        // temperature uncalibrated
        int8_t tempVal = temperature(); // throw away 1st sample
        tempVal = temperature();
        // 0xff == -1 so final offset is reading - AIR_TEMPERATURE - 1
        eeprom_write_byte(&temp_offset, (tempVal - AIR_TEMPERATURE) - 1);
    }
}
The complete code is available in my Google Code repository. To use it, include temperature.h and call the temperature function from your code. You'll have to link in temperature.o as well, or just use my Makefile, which creates a library containing temperature.o that gets linked with the target code. See test_temperature.c for a basic example program.
In my testing with a Pro Mini, the temperature readings were very stable, with no variation between dozens of readings taken one second apart. I also used the ice cube technique (in a plastic bag so the water doesn't drip on the board), and got steady readings of 0C after about 30 seconds.
Saturday, August 9, 2014
Global variables are good
It's a rather absolute statement, to the point of being ridiculous. However many embedded systems "experts" say global variables are evil, and they're not saying it tongue-in-cheek. In all seriousness though, I will explain how global variables are not only necessary in embedded systems, but also how they can be better than the alternatives.
Every embedded MCU I'm aware of, ARM, PIC, AVR, etc., uses globals for I/O. Flashing a LED on PB5? You're going to use PORTB, which is a global variable defining a specific I/O port address. Even if you're using the Wiring API in Arduino, the code for digitalWrite ultimately refers to PORTB, and other global IO port variables as well. Instead of avoiding global variables, I think a good programmer should localize their use when it can be done efficiently.
When using interrupt service routines, global variables are the only way to pass data. An example of this is in my post Writing AVR interrupt service routines in assembler with avr-gcc. The system seconds counter is stored in a global variable __system_time. Access to the global can be encapsulated in a function:
uint32_t getSeconds()
{
    uint32_t system_time;
    cli();
    system_time = __system_time;
    sei();
    return system_time;
}
On a platform such as the ARM where 32-bit memory reads are atomic, the function can simply return __system_time.
Global constants
Pin mappings in embedded systems are sometimes defined by global constants. When working with nrf24l01 modules, I saw code that would define pin mappings with globals like:
uint8_t CE_PIN = 3;
uint8_t CSN_PIN = 4;
While gcc link-time optimization can eliminate the overhead of such code, LTO is not a commonly-used compiler option, and many people are still using old versions of gcc which don't support LTO. While writing a bit-bang uart in assembler, I also wrote a version that could be used in an Arduino sketch. The functions to transmit and receive a byte took a parameter which indicated the bit timing. I wanted to avoid the overhead of parameter passing and use a compile-time global constant.
Compile-time global constants are something assemblers and linkers have supported for years. In gnu assembler, the following directives will define a global constant:
.global answer
.equ answer, 42
When compiled, the symbol table for the object file will contain an (A)bsolute symbol:
$ nm constants.o | grep answer
0000002a A answer
Another assembler file can refer to the external constant as follows:
.extern answer
ldi r16, answer
There's no construct in C to define absolute symbols, so for a while I didn't have a good solution. Gcc does, however, support inline assembler. I find the syntax rather convoluted, but after reading over the documentation and looking at some other inline assembler code, I found something that works:
// dummy function defines no code
// hack to define absolute linker symbols using C macro calculations
static void dummy() __attribute__ ((naked));
static void dummy() __attribute__ ((used));
static void dummy(){
    asm (
        ".equ TXDELAY, %[txdcount]\n"
        ::[txdcount] "M" (TXDELAYCOUNT)
    );
    asm (
        ".equ RXSTART, %[rxscount]\n"
        ::[rxscount] "M" (RXSTARTCOUNT)
    );
    asm (
        ".equ RXDELAY, %[rxdcount]\n"
        ::[rxdcount] "M" (RXDELAYCOUNT)
    );
}
The inline assembler I used does not work outside function definitions, so I had to put it inside a dummy function. The naked attribute keeps the compiler from adding a return instruction at the end of the dummy function, and therefore no code is generated for the function. The used attribute tells the compiler not to optimize away the function even though it is never called.
Build constants
The last type of constants I'll refer to are what I think are best defined as build constants. One example would be conditionally compiled debug code, enabled by a compile flag such as -DDEBUG=1. Serial baud rate is another thing I think is best defined as a build constant, such as how it is done in optiboot's Makefile.
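As a minimal sketch of the baud-rate case (the macro names and default values here are my own, not optiboot's): the UART divisor is computed entirely at compile time from values that can be overridden on the command line.

```c
/* Build constants with defaults; override at build time with e.g.
 *   avr-gcc -DF_CPU=8000000UL -DBAUD=115200 ... */
#ifndef F_CPU
#define F_CPU 16000000UL
#endif
#ifndef BAUD
#define BAUD 57600UL
#endif

/* Standard AVR UART divisor formula: 16 clocks per bit */
#define UBRR_VALUE ((F_CPU / (16UL * BAUD)) - 1)
```

With the defaults above, UBRR_VALUE evaluates to 16, and no baud-rate math is left in the compiled code.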
Wednesday, August 6, 2014
Breaking out a QFP Attiny88 AVR
I had lots of experience soldering through-hole parts, but not surface-mount. With the pin spacing of only 0.8mm, soldering individual pins with a standard soldering iron initially seemed like an impossibility. After reading some guides and watching a couple youtube videos, I realized I should be able to solder the QFP-32 chips with my trusty old pencil-style soldering iron.
Besides the QFP ATtiny, I figured I'd get some passive SMD parts as well. I was surprised how cheap they are - 50c for 100 0.1uF ceramic capacitors and $3 for 1000 0805 resistors. I got a little carried away and even ordered a full reel of 5000 15K 0603 resistors that were on special for $5. Besides being more than I'll probably ever use, the 0603 size is almost too small for hand soldering. Even the 0805 parts, at 0.08" or 2mm long, are a bit tricky to handle. The 0603 parts, at 1.6 by 0.8mm, are the size of a bread crumb.
After all the parts arrived, I started by tinning the pads on the breakout board. That turned out to be a mistake since the leads from the tiny88 would slide off the solder bumps when I tried to solder the first lead. A dab of flux on the bottom of the chip helped keep it in place, but for the second chip I only tinned the pads in two opposite corners. I tack-soldered one lead in one corner, adjusted the chip until it was straight, and then soldered the opposite corner.
Once the chip is held in place with two leads (double- and triple-check its alignment while it is still easy to adjust), the rest of the leads can be soldered. On the first chip I tried, I used too much solder, which caused bridging between some of the leads, so have some solder wick on hand. When I soldered the second board, I only tinned the tip of my iron, which was enough solder for about 4 leads, and avoided bridging. After the soldering is done, check continuity between the leads and the DIP holes with a multimeter. Also check for shorts by testing the adjacent DIP holes.
By my second chip I had no shorts or lack of continuity between leads and the breakout pads. What I did have was weak shorts - between 20 and 200K Ohms of resistance between some pins. More flux and re-soldering didn't help. The problem turned out to be the flux. For the second chip I couldn't find my new flux, so I used an old can of flux. Flux can be slightly conductive, but on old DIP parts with 1.5 to 2mm between leads, it's rarely an issue. The space between the pads on the breakout boards is only 0.2-0.3mm, and along their 3mm length the conductivity of the flux residue can add up. I was able to clean up the residue with acetone and an old toothbrush, and in the future I'll make sure to use low-conductivity flux designed for fine-pitch surface-mount parts.
On the side opposite the chip, the board has a ground plane and pads running along the breakout pins. The pad spacing is perfect for 0805 parts, so I was able to solder a 0.1uF cap between Vcc and the ground plane. Again I encountered a weak short, even though I hadn't used any flux. At first I wondered if my cheap soldering iron might be too hot and could have damaged the MLCC. This time the problem turned out to be a black residue on the capacitor. Surface tension can cause small parts to pull up when they are soldered, so I had used a toothpick to press the capacitor to the board while I soldered the ends. The heat from the soldering iron charred the toothpick, leaving a black semi-conductive residue. Getting out the acetone and toothbrush again cleaned it up, and I made a note to get some anti-static tweezers the next time I order parts.
Among the SMD parts I ordered were some 0603 yellow LEDs. These were even worse to work with than the resistors. First, reading the polarity marks is difficult with the naked eye (or at least with my middle-aged eyes). Second, they're much more fragile than resistors and capacitors. While trying to solder one of them, my iron slipped and melted off the plastic covering the LED die. On my first board I failed at soldering one of the surface-mount LEDs and a resistor between PB5 and Gnd, so for the second board I used a through-hole red LED. I clipped the negative lead to go into the ground plane hole at one end of the board, and bent and clipped the positive lead so it could line up with a resistor on PB5. To avoid a short to the ground plane pad adjacent to the PB5 pin, I insulated it with some nail polish. Here's the finished board:
You might notice that the pin numbers don't seem to match up - AVcc is pin 18 on the tiny88, not 26. I intentionally rotated the chip 90 degrees so the SPI pins and AVcc were all on the same side. This way it's easy to use my breadboard programming cable.
Thursday, July 31, 2014
Busting up a breadboard
A few months ago I bought 10 mini breadboards for prototyping small electronics projects. I've noticed lots of other projects using these boards as well. In the past couple weeks I've encountered strange problems with voltage drops and transient signal fluctuations, which I initially thought were problems with my circuits. Eventually I started suspecting the breadboards.
One of the first things I did was measure resistance between pins that were connected by a 24AWG copper jumper wire. The resistance of the jumper wire is no more than 0.1 Ohms, but to my surprise I found the resistance from pin to pin was 6.4 Ohms. In case a bit of corrosion was reducing the conductance, I unplugged and re-plugged the jumper wire a couple times. I even tried putting some acid flux on the pins and jumper wire, but could not get a significant change in resistance. Just from one end of a strip to the other (5 contact points) I was measuring as little as 0.4 Ohms and as much as 2.1 Ohms.
Most leads and connectors are made from copper, with tin or gold plating. Copper conducts very well, but oxidizes easily, so tin or gold is used to protect it from corrosion. For the past number of years, the cost of copper has averaged over $3/lb, while stainless steel is about half the price. While stainless steel resists corrosion, its resistance is about 40 times higher than copper's. Since a breadboard with poor conductivity has limited usefulness, I decided to break apart one of the worst ones in the batch using a pair of wire cutters.
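The factor of 40 follows directly from typical handbook resistivities (the exact figures vary by alloy; the values below are common numbers for pure copper and 304 stainless, and the strip dimensions in the usage note are illustrative guesses):

```c
/* Typical room-temperature resistivities, in ohm-meters */
static const double rho_copper    = 1.68e-8;
static const double rho_stainless = 6.9e-7;   /* 304 stainless */

/* Bulk resistance of a contact strip: R = rho * length / area */
static double strip_resistance(double rho, double len_m, double area_m2)
{
    return rho * len_m / area_m2;
}
```

For a hypothetical 5cm strip with a 2mm x 0.2mm cross-section, copper works out to about 2 milliohms while stainless comes to tens of milliohms, before contact resistance is added on top.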
I pulled out one of the metal contact strips, and bent apart the fingers. It certainly felt less malleable than copper. I tried scraping the surface and snipped one of the fingers off, and the metal looked homogeneous. Most kitchen cutlery is made of stainless steel, and if you or your friends have ever done hot knives, you probably noticed the discoloration caused by heating. I took one of the metal strips outside and heated it with a propane torch. Here's the result:
Another possibility besides stainless steel is nickel-plated phosphor bronze, like these mini breadboards sold by dipmicro. Phosphor bronze is close to copper in color, and since the core of the metal looks the same as the outside, I suspect the ones I received are not phosphor bronze. It conducts about 3 times better than stainless steel, so this may be one of those situations where paying a bit more is worth the money...
Monday, July 28, 2014
RK2928 wireless TV dongle
I recently purchased a wireless TV dongle for $18 (10% off the regular $20 price). Now they're even selling for $16 on Aliexpress. For power a microUSB-USB cable is included to plug it into a USB port on the TV or into a USB power supply. If your TV supplies 5V power to the HDMI connector (most TVs don't), the dongle will draw power directly from the HDMI port.
It came with a single sheet double-sided "user guide". There's no reference to the manufacturer, though after some searching I found it is functionally identical to the Mocreo MCast. I found the setup somewhat confusing, as the dongle works in either miracast or DLNA/AirPlay mode. Pressing the Fn button switches between modes.
The miracast mode is used to mirror your tablet or phone display to the TV. Android 4.2 and above supports miracast. In 4.4, it's a display option called "Cast screen". The dongle appears as a wifi access point (with an SSID of Lollipop), and to use miracast you must connect to this access point. This would be quite useful for presentations. I used to do corporate training, and a dongle that can plug into the back of a projector avoids the problems associated with long VGA or HDMI cables. Miracast is not fast enough for smooth video playback - for that you need UPnP/DLNA.
To setup DLNA, it is necessary to first connect to the Lollipop access point, and then browse to the IP address (192.168.49.1) of the dongle. The configuration page allows you to scan for your wifi router, and provide the password to connect. When you are done, you'll have to switch your tablet connection to your wifi router.
At this point, if you don't have a UPnP/DLNA server and control point, you won't be able to do much with the dongle, since it's not Chromecast compatible. XBMC is a popular DLNA server, and even Windows 7 includes a DLNA server. Media players like the old Seagate Theatre+ will also work as a DLNA server if you have an attached hard drive.
Once you have a server, you'll also need a controller aka control point app for your android device. The manual that came with my dongle recommended iMedia Share, but this app only supports sharing media that is already on your tablet. Finding a decent app was rather frustrating, as the first couple free apps I tried, such as Allcast, are basically a teaser for the paid app.
After some searching I found Controldlna which does (mostly) work. I was able to browse my DLNA server, and direct the dongle to stream video from the DLNA server. The play/pause function in controldlna was flaky (frequently stopping the video rather than pause), so I had to use the dongle's web page controls. Similar to the dongle's setup page, there's a page that has play/pause/stop buttons.
Playback of a 1Mbps h.264-encoded HD video was very smooth. There was a problem with the aspect ratio, though. The video was 2.25:1, but the dongle displayed it at full-screen 16:9, making the video look vertically stretched.
What is lacking is the ability to browse online videos (like youtube) and direct the dongle to play them. The DLNA protocol supports arbitrary URLs, so the only barrier to playing online video is a control point app that allows selecting videos from the web. If I can't find one, it may be time to see how my Java coding experience translates into writing Android apps.
The dongle has lots of potential, but the software is lacking at this point. Although it's not something for your average person who wants to watch digital video on their TV, for the technical folks I think it's worth the money.
Tuesday, July 15, 2014
Testing 433Mhz RF antennas with RTL-SDR
A couple months ago I picked up an RTL2832U dongle to use with SDR#. I've been testing 433Mhz RF modules, and wanted to figure out what kind of wire antenna works best.
Antenna theory is rather complicated, and designing an efficient antenna involves a number of factors including matching the output impedance of the transmitter. Since I don't have detailed specs on the RF transmitter modules, I decided to try a couple different antenna designs, and use RTL-SDR to measure their performance.
I started with a ~16.5cm (6.5" for those who are stuck on imperial measurements) long piece of 24AWG copper wire (from a spool of ethernet cable). One-quarter wavelength would be 17.3cm (300/433.9 gives a 69.1cm full wavelength); however, a resonant quarter-wave monopole antenna is supposed to be slightly shorter. I started up SDR#, turned off AGC, and set the gain fixed at 12.5dB. The signal peaked at almost -10dB:
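The quarter-wave figure quoted above works out as follows (free-space speed of light; a real wire antenna resonates a few percent shorter because of the wire's velocity factor):

```c
static const double c_light = 299792458.0;  /* speed of light, m/s */
static const double freq_hz = 433.9e6;      /* carrier frequency */

/* quarter wavelength in centimeters */
static double quarter_wave_cm(void)
{
    double wavelength_m = c_light / freq_hz;  /* ~0.691 m */
    return 100.0 * wavelength_m / 4.0;        /* ~17.3 cm */
}
```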
The next thing I tried was coiling the antenna around a pen in order to make it a helical antenna. This made the performance a lot (>10dB) worse:
I also tried a couple uncommon variations like a loop and bowtie antenna. All were worse than the monopole.
The last thing I tried was a dipole, by adding another 16.5cm piece of wire soldered to the ground pin on the module. This gave the best performance of all, nearly 10dB better than the monopole. An impedance-matched half-wave dipole is supposed to have about 3dB worse gain than a quarter-wave monopole. Given the improvement, I suspect the output impedance of the 433MHz transmit modules is much closer to the ~70 ohm impedance of a half-wave dipole than it is to the ~35 ohm impedance of a quarter-wave monopole.
Have any other ideas on how to improve the antenna design? Leave a comment.
Last minute update: I tried a 1/4-wave monopole wire antenna on the RTL dongle, and got 2-3dB better signal reception at 433Mhz than the stock antenna. I tried a full-wave (69cm) wire antenna, and it performed better than the stock antenna, but slightly worse than the 1/4-wave monopole.
Controlling HD44780 displays
This post is a follow-on to my earlier post, What's up with HD44780 LCD displays?
A lesson in reading datasheets
A search for details on programming the HD44780 will result in many different ways of doing it. I think one of the reasons is that datasheets are often ambiguous. I'd say the HD44780U datasheet is not only ambiguous, it's not well organized. When trying to understand the bus timing characteristics, you have to flip back and forth between the tables on pg. 52 and the diagrams on pg. 58. After doing that too many times, I added the timing to the diagram:
When working with MCUs clocked up to 20MHz, the minimum instruction time is 50ns. Therefore if one instruction sets the R/W line low, and the next instruction sets E high, there will be at least 50ns between the two events. So when writing control code, it is safe to ignore tAS, tAH, and tH, which I've written in green. If the next instruction after setting E high sets it back to low, the Pulse Width for E High (PW-EH) will be only 50ns. To ensure the minimum pulse width is met, E should be kept high for at least 5 instruction times at 20MHz or at least 4 instruction times at 16MHz.
When analyzing the datasheet, it's helpful to engage in critical thinking, even if the author of the datasheet didn't! The datasheet shows what timing is sufficient, however it's not completely clear on what timing is necessary. For example, look at the RS line. The timing diagram indicates it is sufficient to set the RS line 40ns before the E pulse, and hold it for 10ns after the E pulse. Having a good idea of how the chip works based on the block diagram on pg 3, I'd say it's not necessary to assert RS until just before the falling edge of the E pulse.
One of the most frequent problems people seem to have controlling these devices is the initialization sequence. In this matter the datasheet is not only unclear on what is necessary, it is even contradictory in places. After reading about the problems people have encountered and experimenting with the devices myself, I believe I can condense what's sufficient to initialize the devices down to 6 steps:
- wait 15ms
- send command (1 E pulse) to set 8-bit interface mode
- wait 65us
- send command (1 E pulse) to set 8-bit interface mode
- wait 65us
- send command (1 E pulse) to set 4-bit interface mode
The first 15ms wait is from pg 23 of the datasheet referring to start-up initialization taking 10ms. This is probably dependent upon the internal oscillator frequency, which is typically 270kHz. Page 55 of the datasheet shows how the frequency depends upon the voltage and the value of the external oscillation resistor Rf.
The modules have a 91k resistor on the back in the form of a small SMD part marked 913 (91 x 10^3). With 5V power the minimum frequency would be 200kHz, or 35% slower than the typical timing listed in the datasheet. I've put a red dot on the 3V graph to show what can happen if you try to run a module at 3V that has 91k for Rf; the frequency could be as low as 150kHz, so commands could take almost twice as long as the typical values listed in the datasheet. I bet this is one of the reasons people sometimes have problems using these displays - if the controlling code is based on the minimum frequency at 5V and the device is run at 3V, it may fail to work.
Peter Fleury's HD44780 library, and some others, wait longer than the datasheet-specified times to cover these differences. For instructions the typical time required is 37us, so waiting 65us should be a safe value. I based this on the 200kHz minimum frequency at 5V, with 30% added for an extra margin of error.
The reason for steps 2-6 is because the device could be in 4 or 8-bit mode when it powers up. The datasheet says, "If the electrical characteristics conditions listed under the table Power Supply Conditions Using Internal Reset Circuit are not met, the internal reset circuit will not operate normally and will fail to initialize the HD44780U." None of the devices I've seen initialize as described on pg. 23 of the datasheet. I suspect they use cheaper clones of the HD44780U that didn't bother with the internal reset circuit.
If the device starts up in 4-bit mode, it will take two pulses on the E line to read 8 bits of an instruction. The lower 4 bits of the instruction to set 4 or 8-bit mode do not matter. By setting the high nibble (D4-D7) to the instruction to set 8-bit mode and then toggling E twice, it will either be processed as the same instruction twice in 8-bit mode, or one instruction in 4-bit mode.
Another HD44780 AVR library was written by Joerg Wunsch. Its delay between instructions is 37us, so it is likely to have timing problems with displays that are running at less than the typical 270kHz frequency. Both Peter's and Joerg's code can write a nibble of data at a time. Here's the code from Peter's library:
dataBits = LCD_DATA0_PORT & 0xF0;
LCD_DATA0_PORT = dataBits |((data>>4)&0x0F);
Corruption can occur if an ISR runs which changes the state of one of the high bits of LCD_DATA_PORT while that section of code executes. In my LCD control code, if LCD_ISR_SAFE is defined, interrupts are disabled while the nibble is written. Another difference with my library code is that it doesn't use the R/W (read/write) line. Since the initialization code can't read the busy flag and has to use timed I/O, there's almost no extra code to make all of the I/O timed. Overall the code is much smaller without reading the busy flag, and needs 6 instead of 7 I/O pins to control the LCD. Just short the RW line (pin 5 on the LCD module) to ground.
I did not use any explicit delay between turning on and off the E line. Looking at the disassembled code, the duration of the E pulse is 7 CPU cycles, which would ensure the E pulse is at least 230ns even on an AVR overclocked to 30MHz. In addition to the control code, I've written a small test program.
test_lcd.c
lcd.c
Thursday, July 10, 2014
What's up with HD44780 LCD displays?
There are lots of projects and code online for using character LCD displays based on these controllers, particularly the ones with 2 rows of 16 characters (1602). They're low power (~1mA @ 5V), and for only $2 each, they're the cheapest LCD modules I've found. The controllers are over 20 years old, so as a mature technology you might think there's not much new to learn about them. Well, after experimenting with them for a few days, I've discovered a few things that I haven't seen other people discuss.
Before getting into software, the first thing you need to do after applying power is set the contrast voltage (pin 3). The amount of contrast is based on the difference between the supply voltage (VDD) and VE. The modules have a ~10K pullup resistor on VE (pin 3), so with nothing attached to it there is no display. If VE is grounded when VDD is 5V, the contrast can be too high and you may only see black blocks. With a simple 1N4148 diode between VE and ground, there's 0.6V on VE, and a good combination of contrast and viewing angle.
Like many other projects, I chose to use the display in 4-bit (nibble) mode, saving 4 pins on the Pro Mini. There's also more software available to drive these displays in nibble mode than byte mode. I like to keep wiring simple, so I spent some time figuring out the easiest way to connect the LCD module to my Pro Mini board. After noticing I could line up D4-D7 on the module with pins 4-7 of the Pro Mini, here's what I came up with (1602 module on the left and the Pro Mini on the right):
It fits on a mini-breadboard and only requires 3 jumper wires - one for ground, one for power, and one for RS (connecting to pin 2 on the Pro Mini). If you use the pro mini bootloader to program the chip, you may have to temporarily unplug the LCD since it connects to the UART lines. If you use a breadboard programming cable to flash the AVR using SPI, then you can leave the module in.
These modules are also available with LED backlights powered from pin 15 and 16. Those pins line up with pins 8 and 9 on the Pro Mini, which could be used to control the backlight.
Power Usage
A datasheet I found for a 1602 LCD module lists the power consumption as 1.1mA at 3V. To measure the actual power usage, I put a 68 ohm resistor in series with the 5V supply, and connected a 270 ohm resistor between GND and VE. The voltage drop on the power line was 45mV, and solving for I in V=IR gives 0.66mA of current. The voltage drop across the VE resistor was 120mV, so 2/3 of the power consumption is from the VE current, with an internal pullup resistance of 11.2K. Most circuits I've seen for these modules recommend a 10K trimpot for controlling VE, which would add another 500uA (5V/10K) to the power consumption.
The internal pullups on the data, RW, and RS lines are another factor in power consumption. If the AVR pins are left in output mode, the four data lines, RW, and RS set low will draw 125uA each (datasheet pg. 51), a total of 750uA. A good HD44780 library will set the AVR pins on those lines high (or to input mode) when not in use. Speaking of software, it's a good point to finish this post and start on my next post about AVR software to control these displays.
Wednesday, July 9, 2014
Writing AVR interrupt service routines in assembler with avr-gcc
For writing AVR assembler code, there are two free assemblers to choose from: the Atmel AVR Assembler and the GNU assembler. While they both support the same instruction set, there are some differences in syntax between the two. The GNU assembler is included with gcc packages like the Atmel AVR toolchain, so if you're already writing AVR programs in C or C++, then you don't need to install anything extra in order to start writing in assembler.
Documents you should refer to for writing interrupts are the avr-libc manual entry for interrupt.h and the data sheet for the AVR MCU you are using. I'll be using an Arduino Pro Mini clone to write a timer interrupt, which uses the Atmega328p MCU.
The purpose of the ISR I'm writing is to maintain a system clock that counts each second. I'll use timer/counter2, which supports clocking off an external 32kHz watch crystal or the system clock oscillator. For now I'll write the code based on running off the Pro Mini's 16MHz external crystal.
With a 16MHz system clock and an 8-bit timer, it's impossible to generate an interrupt every second. Using a prescaler of 256 and a counter reload at 250, an interrupt will be generated every 4ms, or 250 times a second. Every 250th time the ISR gets called it will need to increment a seconds counter. The efficiency of the setup code doesn't matter, so I've written it in C:
// CTC mode: clear counter on compare match
TCCR2A = (1<<WGM21);
TCCR2B = CSDIV256; // clk/256 prescaler
OCR2A = 249; // counts 0-249 = 250 counts
TIMSK2 = (1<<OCIE2A); // enable compare match interrupt
sei();
The first lines of the assembler source, as with all my AVR assembler programs, will be the following:
#define __SFR_OFFSET 0
#include <avr/io.h>
This will avoid having to use the _SFR_IO_ADDR macro when using IO registers. So instead of having to write:
in r0, _SFR_IO_ADDR(GPIOR0)
I can write:
in r0, GPIOR0
The ISR needs to keep a 1-byte counter to count 250 interrupts before adding a second. There's almost 32 million seconds in a year, so a 4-byte counter is needed. These counters could be stored in registers or RAM. In the avr-libc assembler demo project 3 registers are dedicated to ISR use, making them unavailable to the C compiler. Instead of tying up 5 registers, the ISR will use the .lcomm directive to reserve space in RAM. The seconds timer (__system_time) will be marked global so it can be accessed outside the ISR.
; 1 byte variable in RAM
.lcomm ovfl_count, 1
; 4 byte (long) global variable in RAM
.lcomm __system_time, 4
.global __system_time
As an 8-bit processor, the AVR cannot increment a 32-bit seconds counter in a single operation. It does have a 16-bit add instruction (adiw), but not a 32-bit one. So it will have to be done byte-by-byte. Since it doesn't have an instruction for add immediate, the quickest way to add one to a byte is to subtract -1 from it. For loading and storing the bytes between RAM and registers, the 4-byte lds and sts instructions could be used. Loading a pointer into Z allows the 2-byte ld and st instructions to be used, and making use of the auto-increment version of the instructions allows a single load/store combination to be used in a loop. With that in mind, here's the smallest (and fastest) code I could come up with to increment a 32-bit counter stored in RAM:
ldi ZL, lo8(__system_time)
ldi ZH, hi8(__system_time)
loop:
ld r16, Z
sbci r16, -1 ; subtract -1 = add 1
st Z+, r16
brcc loop
Since the 8-bit overflow counter and the seconds counter are sequential in memory, a reload of the Z counter can be avoided:
ldi ZL, lo8(ovfl_count)
ldi ZH, hi8(ovfl_count)
ld r16, Z
cpi r16, 250
brne loop
clr r16 ; reset counter
loop:
sbci r16, -1 ; subtract -1 = add 1
st Z+, r16
ld r16, Z
brcc loop
For testing the timer, the low byte of the seconds count is written to PORTB. If it works, the LED on the Pro Mini's pin 13 (PB5) will toggle every 2^5 = 32 seconds.
extern volatile unsigned long __system_time; // defined in the assembler source
DDRB = 0xff; // output mode
while (1) {
    PORTB = __system_time & 0xff;
}
After compiling, linking, and flashing the code to the Pro Mini, it didn't work - the LED on PB5 never flashed. I looked at the disassembled code and couldn't find anything wrong. To make sure the while loop was running, I added a line to toggle PB0 after the write to PORTB. Flashing the new code and attaching an LED to PB0 confirmed the loop was running. Adding code to set a pin inside the ISR confirmed it was running. The problem was the counter wasn't incrementing. After going over the AVR instruction set again, I realized the mistake had to do with the inverted math. When the value of ovfl_count in r16 is less than 250, the branch if not equal is taken, continuing execution at the sbci instruction. However, since the carry flag is set by the cpi instruction when r16 is less than 250, the sbci instruction subtracts -1 and subtracts carry for a net result of 0. The solution I came up with was changing the loop to count from 6 up to 0:
cpi r16, 0
brne loop
ldi r16, 6 ; skip counts 1-6
With that change, it worked!
I cleaned up the code and posted t32isr.S and timer-asm.c to my Google Code repository. In a future post I'll add some code to compute the date and time from the seconds count.
Since the shutdown of Google Code, you can find the source on Github.
https://github.com/nerdralph/nerdralph/tree/master/avr