Hope you don't mind if I point out a couple of small bugs in babble.c:
1. When read_word() reads the last word in a string, at line 146 it will read past the end (and into uninitialised memory, or the leftovers of previous longer strings), because you have already added 1 to len on line 140 to skip past the character that delimited the word. Undefined behaviour.
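The usual shape of the fix is to step past a delimiter only when one was actually consumed. A sketch (I've guessed at your variable names and delimiter set, so adjust to taste):

len = strcspn(p, " \t\n"); // Length of the word itself, not counting any delimiter
memcpy(word, p, len);
word[len] = '\0';
return p + len + (p[len] != '\0'); // Resume position: skip the delimiter only if we're not at the end of the string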
2. grow_chain() doesn't assign to (*chain)->capacity, so it winds up calling realloc() every time, unnecessarily. This probably isn't a big deal in practice, since most realloc() implementations allocate in larger chunks and take a fast no-op path when they determine that no reallocation and copy is needed.
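Still, the fix is tiny. A sketch with guessed names (struct chain and chain_alloc_size() are hypothetical stand-ins for whatever you actually have):

size_t new_capacity = (*chain)->capacity ? (*chain)->capacity * 2 : 16;
struct chain *tmp = realloc(*chain, chain_alloc_size(new_capacity));
if (tmp != NULL) {
    *chain = tmp;
    (*chain)->capacity = new_capacity; // <-- the missing assignment
}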
3. Not a bug, but your index precomputation on lines 184-200 could be much more efficient. Currently it takes O(n^2 * MAX_LEAF) time, but it could be improved to linear time if you (a) did most of this computation once in the original Python extractor and (b) stored things better. Specifically, you could store and work with just the numeric indices, "translating" them to strings only at the last possible moment, just before writing the word out. Translating index i to word i can be done very efficiently with two data structures:

char word_data[MAX_WORDS * MAX_WORD_LEN]; // All words concatenated, with no delimiters
unsigned start_pos[MAX_WORDS + 1]; // Each element is an offset into word_data

(Of course you could dynamically allocate them instead -- the static sizes just give the flavour.) word_data stores all words concatenated together without delimiters; start_pos stores offsets into this buffer, with one extra entry at the end holding the total length used. To extract word i to dest:
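len = start_pos[i + 1] - start_pos[i];
memcpy(dest, word_data + start_pos[i], len); // No scanning or strlen() needed -- two array lookups and a copy
dest[len] = '\0';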
You can store the variable-length list of possible next words for each word in a similar way, with a large buffer of integers and an array of offsets into it:
unsigned next_words[MAX_WORDS * MAX_LEAF]; // Each element is a word index
unsigned next_words_start_pos[MAX_WORDS + 1]; // Each element is an offset into next_words
Now the indices of all words that could follow word i are enumerated by:
for (j = next_words_start_pos[i]; j < next_words_start_pos[i + 1]; ++j) {
// Do something with next_words[j]
}
(Note that you don't actually store the "current word" in this data structure at all -- it's the index i into next_words_start_pos, which you already know!)
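Putting the two structures together, one step of the generation loop is just a few array lookups. A sketch (pick_next() and emit_word() are hypothetical helpers; assumes rand() has been seeded, that every word has at least one successor, and <stdio.h>/<stdlib.h> are included):

// Return the index of a random successor of word i.
unsigned pick_next(unsigned i) {
    unsigned lo = next_words_start_pos[i];
    unsigned hi = next_words_start_pos[i + 1];
    return next_words[lo + rand() % (hi - lo)]; // Modulo bias is fine for babble
}

// Write word i to stdout without ever copying it out of word_data.
void emit_word(unsigned i) {
    fwrite(word_data + start_pos[i], 1, start_pos[i + 1] - start_pos[i], stdout);
}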