A work colleague (AB) discovered a little trick with -pg profiling that everyone should know about if they don't already.
Profiling is often avoided in live apps because it is expensive and slows down applications that are already running slowly (why else would you profile?).
But there are two separate things going on when profiling. If you COMPILE with -pg, every function that gets called records information about itself. This information is used to determine how many times each function is called, and which function is called by which. Unfortunately this is expensive for your application, especially for well-structured C++ apps, where functions are called very frequently.
When you LINK with -pg, the program samples your application 100 times a second and records which function is active at each sample. This information is used to estimate the proportion of time your app spends in each function. This is often the most sought-after information when profiling, and it is computationally very cheap.
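For a feel of why the compile-time half is the expensive part: compiling with -pg makes gcc emit a call to a profiling hook (traditionally mcount) in every function prologue. The sketch below is a simplified hand-rolled analogue of that bookkeeping, not the real instrumentation, and all of the names in it are made up.

#include <cstdio>
#include <map>
#include <string>

// Hypothetical analogue of the per-call bookkeeping that compile-time -pg
// adds: every instrumented function pays for an extra call and a counter
// update on each invocation.
static std::map<std::string, unsigned long> g_call_counts;

static void fake_mcount(const char* fn)   // stand-in for gcc's mcount hook
{
    ++g_call_counts[fn];                  // this work happens on EVERY call
}

static long long leaf(long long x)
{
    fake_mcount("leaf");                  // what the compiler would emit for you
    return x * 2 + 1;
}

static long long worker(long long n)
{
    fake_mcount("worker");
    long long sum = 0;
    for (long long i = 0; i < n; ++i)
        sum += leaf(i);
    return sum;
}

int main()
{
    long long total = worker(10000000);
    std::printf("result=%lld\n", total);
    for (const auto& kv : g_call_counts)
        std::printf("%-8s called %lu times\n", kv.first.c_str(), kv.second);
    return 0;
}

Multiply that overhead by every call in a call-heavy C++ program and the slowdown adds up quickly.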
To arrive at these conclusions I wrote a test application with a handful of functions that get called a lot, one of them recursive. My results with the different profiling options were as follows:
(1) No profiling: application took 13 seconds to execute.
(2) Linked with -pg: application took 13 seconds to execute.
(3) Compiled and Linked with -pg: application took 59 seconds to execute.
When only linking with -pg, the call graph was absent from the profiling results, but the percentage of time spent in each function was still reported accurately.
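For anyone who wants to reproduce the comparison, here is a minimal stand-in in the spirit of the test application described above (the original source isn't shown, so the file name demo.cpp and the functions shuffle, mix and recurse are assumptions), together with the three build variants and the gprof step.

// Assumed stand-in for the test application: a few hot functions plus one
// recursive one. Build it three ways and compare run times:
//
//   g++ demo.cpp -o demo             // (1) no profiling
//   g++ -c demo.cpp -o demo.o
//   g++ -pg demo.o -o demo           // (2) -pg at link time only
//   g++ -pg demo.cpp -o demo         // (3) -pg at compile and link time
//
// Each profiled run writes gmon.out next to the binary; inspect it with:
//   gprof ./demo gmon.out
#include <cstdio>

static double shuffle(double x) { return x * 1.0000001 + 0.5; }
static double mix(double x)     { return shuffle(x) - 0.25; }

static double recurse(double x, int depth)
{
    if (depth == 0)
        return x;
    return recurse(mix(x), depth - 1);   // the recursive hot path
}

int main()
{
    double acc = 0.0;
    for (int i = 0; i < 2000000; ++i)
        acc += recurse(static_cast<double>(i), 50);
    std::printf("%f\n", acc);
    return 0;
}

With variant (2) you should see gprof's flat profile (time percentages) but no call counts or call graph, matching the observation above.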