The first step to building a sensor network is establishing the wireless interface for communication. Xbees can operate in two different modes: Transparent and API. Transparent mode causes the modules to act as a "cable replacement": each byte sent to an Xbee is received by the other Xbees (unless the destination address has been modified in the AT parameters; by default messages are broadcast to all modules). In API mode, we construct custom packets ourselves, in which we can modify various options, specify a destination address, request acknowledgement, or limit the number of hops a message can take in a multihop network.
If you aren't familiar with Xbee API mode, take a look at Chapter 6 (page 35) in the documentation here:
ftp://ftp1.digi.com/support/documentation/90000866_A.pdf
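For quick reference, every API frame follows the same general layout described in that chapter. The sketch below is just a comment summarizing it, not code to run:
/* General Xbee API frame layout:
 *
 *  Byte 0      : 0x7E (start delimiter)
 *  Bytes 1-2   : length of the frame data, MSB then LSB
 *                (excludes the delimiter, length bytes, and checksum)
 *  Bytes 3..n  : frame data (API identifier followed by identifier-specific fields)
 *  Last byte   : checksum = 0xFF - (sum of the frame data bytes & 0xFF)
 */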
The following code consists of snippets from what will soon be the Xbee avr-gcc library. I've tested the functions used here, and as far as I can tell, everything works just fine. A side note: all USART communications are currently set to 9600 baud, which I plan to increase later, but it's a convenient rate for development.
Currently in my header file I have a structure that I store all the incoming packet data into:
typedef struct{
    int len;
    char api_identifier;
    long DH;
    long DL;
    int source_addr_16bit;
    char options;
    char data[20];
    char checksum;
} RxPacket;
I am only using the Zigbee Tx request (API Identifier: 0x10) packet for sending information currently, so this structure has everything I need to store the Zigbee Rx packet (API Identifier: 0x90) that is received.
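For reference, here is roughly how the fields of a Zigbee Rx packet (0x90) map onto that structure; note that the 64-bit source address is split across DH and DL, and that data[20] caps the payload this struct can hold at 20 bytes:
/* Zigbee Receive Packet (API identifier 0x90) as stored in RxPacket:
 *
 *  len               - frame data length from the two length bytes
 *  api_identifier    - 0x90
 *  DH, DL            - upper and lower 32 bits of the 64-bit source address
 *  source_addr_16bit - 16-bit source network address
 *  options           - receive options byte
 *  data[20]          - RF payload (limited to 20 bytes by this struct)
 *  checksum          - frame checksum
 */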
The following code is used to send an array constructed in the form of an API frame:
/* Routine to send a byte through USART0
 *
 * This routine polls the UCSR0A register until it indicates it is ready
 * for a new byte to be transmitted, then loads the new byte into the UDR0
 * register for transmission.
 */
void USART_vSendByte(char Data)
{
    // Wait if a byte is being transmitted
    while ((UCSR0A & (1 << UDRE0)) == 0) {}; // Do nothing until UDR is ready for more data to be written to it
    // Transmit data
    UDR0 = Data;
}
void send_Msg(char *data, int len)
{
    cli(); // Disable interrupts
    // Generate checksum
    char checksum;
    int counter = 0, sum = 0;
    for(counter = 0; counter <= len - 1; counter++)
        sum += data[counter];
    // Checksum is calculated by adding the frame data bytes together, and subtracting the
    // last 8 bits from 0xFF.
    checksum = 0xFF - (sum & 0x00FF);
    // Transmit frame
    USART_vSendByte(0x7E);     // Start delimiter
    USART_vSendByte(len >> 8); // Length MSB
    USART_vSendByte(len);      // Length LSB
    for(counter = 0; counter <= len - 1; counter++) // Transmit frame data
        USART_vSendByte(data[counter]);
    USART_vSendByte(checksum); // Transmit checksum
    sei(); // Enable interrupts
}
The USART_vSendByte() routine simply writes a character to USART0, waiting first so that it doesn't overwrite a previous character still being sent. Obviously the USART needs to be initialized before this function will work.
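I haven't shown my initialization code here, but a minimal setup looks something like the sketch below. This assumes an ATmega with a single USART, the 9600 baud rate mentioned above, and an 8 MHz clock; USART_vInit() is just a name I'm using for illustration, and F_CPU is often defined in the Makefile instead:
#include <avr/io.h>

#define F_CPU 8000000UL // Assumed clock frequency; change to match your hardware
#define BAUD  9600UL

void USART_vInit(void)
{
    uint16_t ubrr = (F_CPU / (16UL * BAUD)) - 1; // Standard asynchronous-mode UBRR calculation
    UBRR0H = (uint8_t)(ubrr >> 8);
    UBRR0L = (uint8_t)ubrr;
    UCSR0B = (1 << RXEN0) | (1 << TXEN0) | (1 << RXCIE0); // Enable RX, TX, and the RX complete interrupt
    UCSR0C = (1 << UCSZ01) | (1 << UCSZ00);               // 8 data bits, no parity, 1 stop bit
}
The RX complete interrupt (RXCIE0) is what lets the receive ISR shown later fire.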
The send_Msg() routine expects an array containing a valid API frame and the length of that array to be passed to it. It then tacks on the start delimiter, the length of the entire packet, and the checksum at the end. The checksum is calculated according to the Xbee documentation.
Sometimes for testing I will pass a pre-constructed array such as
char test[] = {0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x01, 'T', 'E', 'S', 'T' };
to send_Msg(), but for actual applications we will use a routine to construct this array for us.
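To make the framing concrete, here is what send_Msg() puts on the wire for that test array: there are 18 frame data bytes, so the length field is 0x0012; the bytes sum to 0x152, 0x152 & 0xFF = 0x52, and 0xFF - 0x52 = 0xAD.
/* Frame transmitted for the test array above:
 *
 *  0x7E                 Start delimiter
 *  0x00 0x12            Length = 18 frame data bytes
 *  0x10 0x00 ... 'T'    The 18 bytes of the test array
 *  0xAD                 Checksum = 0xFF - (0x152 & 0xFF)
 */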
The following function constructs an API frame according to the specifications of API Identifier 0x10 and passes the array to our send_Msg() routine:
void ZigBee_TX_Request(char Frame_ID, long DH, long DL, int _16bitAddr, char Hops, char Options, char *RF_Data, int len )
{
    int i;         // counting variable
    char buff[30]; // temporary buffer for transmitting
    // ZigBee Transmit Request API Identifier
    buff[0] = 0x10;
    // Identifies the UART data frame for the host to correlate with a
    // subsequent ACK (acknowledgement). Setting Frame ID to '0' will disable response frame.
    buff[1] = Frame_ID;
    // 64-bit destination address, MSB first, LSB last. Broadcast = 0x000000000000FFFF
    buff[2] = (DH >> 24);
    buff[3] = (DH >> 16);
    buff[4] = (DH >> 8);
    buff[5] = DH;
    buff[6] = (DL >> 24);
    buff[7] = (DL >> 16);
    buff[8] = (DL >> 8);
    buff[9] = DL;
    // 16 bit address
    buff[10] = (_16bitAddr >> 8);
    buff[11] = _16bitAddr;
    // Number of hops for message to take
    buff[12] = Hops;
    // Options
    buff[13] = Options;
    // RF payload
    for(i = 0; i < len; i++)
        buff[14+i] = RF_Data[i];
    send_Msg(buff, 14+len);
}
Limitations of this function include the fixed 30-byte buffer size, but that can be changed by adding a #define to the library's header file for better elegance.
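As an example of how this gets called (a hypothetical snippet, not part of the library), sending the string "TEST" to the coordinator might look like this. A 64-bit destination of all zeros addresses the coordinator, 0xFFFE is the 16-bit address to use when the network address is unknown, a non-zero Frame ID requests a transmit status response, and a hops/radius value of 0 means maximum hops:
char msg[] = "TEST";
// Frame ID 0x01, coordinator as destination, unknown 16-bit address,
// maximum broadcast radius, no options, 4 bytes of payload.
ZigBee_TX_Request(0x01, 0x00000000, 0x00000000, 0xFFFE, 0x00, 0x00, msg, 4);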
Now that we've covered how to send a packet, let's look at receiving a packet.
RxPacket rx_pkt;
ISR(USART_RX_vect)
{
    cli();
    if(UDR0 == 0x7E)
        receive_Msg(&rx_pkt);
    else
    {
        sei();
        return;
    }
}
This is the interrupt service routine for the USART0 receive interrupt, along with a global RxPacket variable of the structure type shown earlier. In this ISR we check whether the incoming byte is the start delimiter, 0x7E. If it is, we know we are receiving a packet, and we call the receive_Msg() routine. Otherwise the byte is disregarded.
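None of this fires until the receive interrupt is enabled globally. A minimal main() (assuming the hypothetical USART_vInit() sketched earlier, which sets RXCIE0) would look roughly like:
#include <avr/io.h>
#include <avr/interrupt.h>

int main(void)
{
    DDRD |= 0x80;  // Test LED on PD7 as an output (the pin toggled in receive_Msg)
    USART_vInit(); // Hypothetical init from earlier: 9600 baud, RX/TX and RX interrupt enabled
    sei();         // Globally enable interrupts so USART_RX_vect can fire

    while(1)
    {
        // Main loop: build transmit requests, inspect rx_pkt, etc.
    }
    return 0;
}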
We also disable interrupts when receiving bytes (using cli() and sei(); these are avr-gcc shortcuts for disabling and enabling all interrupts, respectively) so the function doesn't get preempted by another ISR. I've read that the AVR clears the global interrupt flag automatically while executing an ISR, but I'm paranoid so I disable them anyway. Now let's look at receive_Msg():
void receive_Msg(RxPacket *rx_data)
{
    PORTD ^= 0x80; // TEST LED
    cli(); // Disable interrupts
    int count;
    char temp;
    while ((UCSR0A & (1 << RXC0)) == 0) {}; // Do nothing until data has been received and is ready to be read from UDR
    temp = UDR0; // Next incoming byte is the MSB of the data length
    while ((UCSR0A & (1 << RXC0)) == 0) {}; // Do nothing until data has been received and is ready to be read from UDR
    rx_data->len = (temp << 8) | UDR0; // Merge MSB and LSB to obtain data length
    while ((UCSR0A & (1 << RXC0)) == 0) {}; // Do nothing until data has been received and is ready to be read from UDR
    rx_data->api_identifier = UDR0;
    switch(rx_data->api_identifier) // Select proper sequence for receiving various packet types
    {
        case 0x90: // Zigbee Receive Packet
            for(count = 1; count < rx_data->len; count++)
            {
                while ((UCSR0A & (1 << RXC0)) == 0) {}; // Do nothing until data has been received and is ready to be read from UDR
                if(count == 1)
                    rx_data->DH = ((long)UDR0 << 24); // Cast to long; a plain int shift would be truncated to 16 bits
                else if(count == 2)
                    rx_data->DH |= ((long)UDR0 << 16);
                else if(count == 3)
                    rx_data->DH |= ((long)UDR0 << 8);
                else if(count == 4)
                    rx_data->DH |= UDR0;
                else if(count == 5)
                    rx_data->DL = ((long)UDR0 << 24);
                else if(count == 6)
                    rx_data->DL |= ((long)UDR0 << 16);
                else if(count == 7)
                    rx_data->DL |= ((long)UDR0 << 8);
                else if(count == 8)
                    rx_data->DL |= UDR0;
                else if(count == 9)
                    rx_data->source_addr_16bit = (UDR0 << 8);
                else if(count == 10)
                    rx_data->source_addr_16bit |= UDR0; // OR in the low byte so the high byte is kept
                else if(count == 11)
                    rx_data->options = UDR0;
                else
                    rx_data->data[count - 12] = UDR0; // Note: will overflow data[20] if the payload exceeds 20 bytes
            }
            while ((UCSR0A & (1 << RXC0)) == 0) {}; // Do nothing until data has been received and is ready to be read from UDR
            rx_data->checksum = UDR0; // Store checksum
            break;
        default:
            break;
    }
    sei(); // Enable interrupts
    PORTD ^= 0x80; // TEST LED
}
This routine reads in an API packet byte by byte and inserts the bytes into the RxPacket structure. The comments should explain most of what's going on here, and if there's any further confusion, take a look at the packet structure again in the Xbee documentation. The one thing that looks weird right now is the switch statement. Currently, there is only one valid condition for it, case 0x90. This is the API identifier for Zigbee Rx packets, which are the only ones we are using currently. Eventually this code should be adapted to include various other types of packets.
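One thing the routine doesn't do yet is verify the checksum it stores. A quick validity check, sketched here using the same rule as send_Msg() (this isn't part of the library yet, just an illustration), could look like:
// Returns 1 if the stored checksum matches the received frame data, 0 otherwise.
// Assumes the fields were filled in by receive_Msg() for a 0x90 frame.
char checksum_OK(RxPacket *pkt)
{
    int i, sum = 0;
    sum += pkt->api_identifier;
    sum += (pkt->DH >> 24) & 0xFF;
    sum += (pkt->DH >> 16) & 0xFF;
    sum += (pkt->DH >> 8) & 0xFF;
    sum += pkt->DH & 0xFF;
    sum += (pkt->DL >> 24) & 0xFF;
    sum += (pkt->DL >> 16) & 0xFF;
    sum += (pkt->DL >> 8) & 0xFF;
    sum += pkt->DL & 0xFF;
    sum += (pkt->source_addr_16bit >> 8) & 0xFF;
    sum += pkt->source_addr_16bit & 0xFF;
    sum += pkt->options;
    for(i = 0; i < pkt->len - 12; i++) // Payload length = frame length minus the 12 header bytes
        sum += pkt->data[i];
    return ((0xFF - (sum & 0xFF)) == (pkt->checksum & 0xFF));
}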
And that's pretty much the bare minimum for passing messages using Xbee Series 2 API mode. I would also like to note that the only setting I had to modify to get this code to work consistently was setting JV to 1. If you don't, the module will not scan through different channels looking for a coordinator to join; it will just give up on joining a network and sit there. I'm not sure why that isn't enabled by default, but be sure to set it using X-CTU or your own AT parameter setup function.
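If you would rather set it from the microcontroller than from X-CTU, the API also defines an AT Command frame (API identifier 0x08) that can be pushed through the same send_Msg() routine. A rough sketch, not something I've wrapped into the library yet, would be:
// Set JV (channel verification) to 1 using an AT Command frame (API identifier 0x08),
// then send WR to save the setting to non-volatile memory.
// The frame IDs (0x01, 0x02) are arbitrary non-zero values.
char set_JV[]         = {0x08, 0x01, 'J', 'V', 0x01};
char write_settings[] = {0x08, 0x02, 'W', 'R'};
send_Msg(set_JV, 5);
send_Msg(write_settings, 4);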