A common pattern for using a cache looks like the following. In this example, get_from_cache() returns zero when the data is not in the cache.
int lookup(int key)
{
    int data;

    data = get_from_cache(key);
    if (data == 0)
    {
        /* Cache miss: fetch from the backing store and populate the cache. */
        data = get_from_backing_store(key);
        set_in_cache(key, data);
    }
    return data;
}
Let's estimate the performance gain from having this cache. Assume that get_from_cache() takes $1$ ms, while all the work that must be done during a cache miss (get data from the backing store, then set in cache) takes $100$ ms. When the cache hit ratio is $95 \%$, what is the average lookup time?
- $1 \: ms$
- $6 \: ms$
- $30 \: ms$
- $95 \: ms$