C programs must often represent a set of binary conditions, or ``flags.'' When they do this using an int and a set of bitmasks, the code looks something like this:
int flags = 0;
int i;
/* set the ith bit */
int mask = 1 << i;
flags |= mask;
/* test the ith bit */
if (flags & mask) { ... }
This is fairly clear, but what if there are more flags than bits in an int? The program becomes messier (and possibly less portable):
int flags[100];
int i;
int k;
int intsize = 8 * sizeof(int);
for (k = 0; k < 100; k++) {
    flags[k] = 0;
}
/* set the ith bit */
flags[i/intsize] |= (1 << (i % intsize));
/* test the ith bit */
if (flags[i/intsize] & (1 << (i % intsize))) { ... }
Some of the messiness can be hidden using macros, but the added complexity of macro semantics (arguments are expanded textually and may be evaluated more than once) can actually make the program trickier to understand and debug; the example after the code below illustrates one such pitfall.
#define INTSIZE (sizeof(int)*8)
#define WORD(i) ((int)((i)/INTSIZE))
#define MASK(i) (1 << ((i) % INTSIZE))
#define BIT(i) ((flags[WORD(i)] & MASK(i))!=0)
#define SET(i) ((flags[WORD(i)] |= MASK(i)))
#define RESET(i) ((flags[WORD(i)] &= ~MASK(i)))
int flags[100];
int i;
/* set the ith bit */
SET(i);
/* test the ith bit */
if (BIT(i)) { ... }
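For example (an illustrative pitfall, not part of the original code), passing an argument with a side effect to one of these macros expands it into both WORD and MASK, so it is evaluated twice:
SET(i++);   /* expands to ((flags[WORD(i++)] |= MASK(i++)));
               i is modified twice with no intervening sequence
               point, which is undefined behavior */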
The situation worsens if the size of the bitstring can't be predicted in advance, since the flags array must then be allocated dynamically.
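A sketch of what that extra bookkeeping might look like (the names nbits and nwords are illustrative assumptions, not part of the original code):
int nbits;      /* number of flags; value known only at run time */
int intsize = 8 * sizeof(int);
int nwords;
int *flags;
nwords = (nbits + intsize - 1) / intsize;    /* round up to whole ints */
flags = (int *) calloc(nwords, sizeof(int)); /* calloc (from <stdlib.h>) zero-fills */
if (flags == 0) { ... }                      /* allocation can fail */
/* set the ith bit, as before */
flags[i/intsize] |= (1 << (i % intsize));
/* ... and the storage must eventually be released */
free(flags);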
Using the Bits component, the solution is straightforward regardless of the size of the bitstring:
#include <Bits.h>
Bits flags(0,100); // 100 zero bits initially
int i;
flags.set(i);
if (flags[i]) { ... }
It's easy to increase the size of a bitstring:
flags.size(200); // extend the bit string by
// adding 100 bits to the
// high-order end of the
// bit string
and it's easy to do things that can't be done with macros, like concatenating two bitstrings and performing logical operations:
Bits b1(0xf0,8); // 8 bits with value 0xf0
Bits b2(0x0f,8); // 8 bits with value 0x0f
Bits b3 = concat(b1,b2); // 16 bits with value 0xf00f
Bits b4 = b1 ^ b2; // exclusive 'or' (8 bits
// with value 0xff)