The most important factor in getting good performance is understanding
the run-time efficiency of the algorithms you are calling (and writing).
This document makes an effort to give the run-time efficiency of
various functions, but you are encouraged to take a look at
the code and get an understanding of things yourself. This is a deep subject,
in that many algorithms perform well on smaller sets of data but not
larger, and vice versa.
As an example, the
lstrip() command removes whitespace at the
beginning of a string. Here is an example that uses
lstrip() to check whether
// is the first non-whitespace text at the start of a string:

    lstrip(x);
    bool comment = x.startsWith("//");
Due to string structure, this will involve a memmove of the remaining
string contents to remove the whitespace gap. On a larger string, this
could be slow.
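To make the cost concrete, here is a minimal sketch of the same strip-then-check pattern using std::string as a stand-in for the library's string type (starts_with_comment is a hypothetical name, not part of the library). Erasing the leading whitespace shifts every remaining character left, which is the same kind of work as the memmove:

```cpp
#include <cassert>
#include <string>

// Strip leading spaces/tabs in place, then test for a "//" prefix.
// The erase() call moves the entire tail of the string left,
// so the cost grows with the amount of text after the whitespace.
bool starts_with_comment(std::string& s) {
    std::string::size_type n = s.find_first_not_of(" \t");
    if (n == std::string::npos) n = s.size();
    s.erase(0, n);                       // O(remaining length) shift
    return s.compare(0, 2, "//") == 0;
}
```

For example, starts_with_comment on "   \t// note" strips the string to "// note" and returns true.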
There are a number of ways to get the same answer as the code above
while avoiding the
memmove. For example, you could use the
(const char*) conversion function and do the check yourself:

    const char* cstr = (const char*)x;
    while ((*cstr == ' ') || (*cstr == '\t'))
        ++cstr;
    bool comment = (*cstr == '/') && (*(cstr+1) == '/');
Now the routine will probably run faster when there is a large amount of
text after the first non-whitespace character. Things are not so
simple, though. For starters, the code took a large readability hit.
Even if I took the time to comment the above code well, it's still
tougher to read. Another problem is that the chances of having a bug in
the second version are higher. Finally, if our whitespace set contained
more characters than just space and tab, the simple
character tests would start eating into the performance. The
lstrip() function handles the same situation much better (look at
the code if you are curious as to how).