I suppose the only adequate way to find out is to run speed tests, but that'll be somewhat complicated with the 1st query, since we'd want to profile not the query itself, but how its results are handled.
My personal experience is that the more rows there are and the larger they are, the more beneficial the cache functions become (I ran tests on house loading myself, and I remember seeing results posted by JernejL before). The amount of string-related work is very large when mysql_fetch_row and sscanf are used together; I assume that is part of the reason that approach is slower.
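For reference, this is roughly the non-cached pattern I mean - a sketch from memory of the pre-R7 style, where every row arrives in the script as one delimited string that sscanf then has to split again (the table, column layout and exact signatures are just placeholders, they varied between plugin versions):

    // Pre-R7 style result handling, sketched from memory: each row is
    // copied into PAWN as a single delimited string, then split again.
    new line[128], houseid, Float:price;
    mysql_query("SELECT id, price FROM houses");
    mysql_store_result();
    while (mysql_fetch_row(line)) // one full string copy per row
    {
        // sscanf does yet more string work to break the row apart
        sscanf(line, "p<|>df", houseid, price);
    }
    mysql_free_result();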
You could, however, test the speed of the 2nd and 3rd methods and see how significant the difference between cache_get_field_content and cache_get_row is. Since you select just 2 fields and 10 rows, it should come down to some 30 string comparisons: assuming the plugin scans the field names in order, looking up the first field costs one comparison and the second costs two, so three per row across 10 rows (the implementation might be smart enough to optimize some of that away, who knows). The test itself is just a matter of putting each version in a tight loop - and dropping extra code such as printing, which would make the results portray something else.
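A minimal sketch of such a test, assuming the R7-style cache natives and SA-MP's GetTickCount for timing - the column names and the iteration count are made up, and it has to run inside the query's result callback while the cache is still active:

    // Benchmark sketch: time both lookup styles over the same cached result.
    new rows, fields, dest[32];
    cache_get_data(rows, fields);

    new tick = GetTickCount();
    for (new i = 0; i < 100000; i++) // repeat enough to get a measurable span
    {
        for (new row = 0; row < rows; row++)
        {
            cache_get_field_content(row, "id", dest);    // lookup by field name
            cache_get_field_content(row, "price", dest);
        }
    }
    printf("cache_get_field_content: %d ms", GetTickCount() - tick);

    tick = GetTickCount();
    for (new i = 0; i < 100000; i++)
    {
        for (new row = 0; row < rows; row++)
        {
            cache_get_row(row, 0, dest); // lookup by index, no name comparison
            cache_get_row(row, 1, dest);
        }
    }
    printf("cache_get_row: %d ms", GetTickCount() - tick);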
For the second code, I have an optimization tip if you're using a version of BlueG's plugin newer than R7, or have included the cache_get_row_int/cache_get_row_float functions that I posted about a year or so ago: use those functions.
The current method handles the cache entry as a string and passes it to your PAWN script as a string, where it is converted with strval; cache_get_row_int, however, skips the string stage and hands the value back directly.
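To illustrate, a sketch only - it assumes the query returns an integer in field 0 and a float in field 1, e.g. a house ID and its price:

    // Inside the result callback, with the cache still active.
    new rows, fields, dest[32];
    cache_get_data(rows, fields);

    for (new row = 0; row < rows; row++)
    {
        // String route: the value is fetched as text, then converted in PAWN.
        cache_get_row(row, 0, dest);
        new houseid = strval(dest);

        // Typed route: the plugin converts internally, no string round-trip.
        new houseid2 = cache_get_row_int(row, 0);
        new Float:price = cache_get_row_float(row, 1);

        printf("house %d (%d) costs %f", houseid, houseid2, price);
    }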
This does not make a huge difference - we're already operating on a platform that's slow by nature, and the gap between the two versions is not _that_ big - but speaking from a personal point of view, I always feel a bit better when I know I've gone over my code and made it even slightly better.