One approach is to calculate the execution time for a typical program consisting of, say, $I$ machine-level instructions:
$$
\begin{aligned}
\text { Time }_A & =\text { InstructionCount } \times \text { CPI }_A \times \text { CycleTime }_A \\
& =I \times 1.2 \times \frac{1}{2.4 \times 10^9}=\frac{1}{2} I \times 10^{-9} \\
\text { Time }_B & =\text { InstructionCount } \times \text { CPI }_B \times \text { CycleTime }_B \\
& =I \times 2.0 \times \frac{1}{3.0 \times 10^9}=\frac{2}{3} I \times 10^{-9}
\end{aligned}
$$
So, a typical program will execute (slightly) more quickly on Machine A. More precisely, the relative performance would be:
$$
\frac{\text { Time }_B}{\text { Time }_A}=\frac{\frac{2}{3} I \times 10^{-9}}{\frac{1}{2} I \times 10^{-9}}=\frac{4}{3}
$$
So, Machine $A$ is $4/3$ times as fast as Machine $B$.
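As a quick check, the same calculation can be sketched in a few lines of Python. The CPI and clock-rate figures (1.2 at 2.4 GHz for Machine A, 2.0 at 3.0 GHz for Machine B) are the ones given above; the instruction count $I$ is an arbitrary placeholder, since it cancels out of the performance ratio.

```python
def execution_time(instruction_count, cpi, clock_rate_hz):
    """CPU time = instruction count x CPI x cycle time (1 / clock rate)."""
    return instruction_count * cpi * (1.0 / clock_rate_hz)

I = 1e9  # illustrative value only; any I gives the same ratio

time_a = execution_time(I, cpi=1.2, clock_rate_hz=2.4e9)  # 0.5 s for I = 1e9
time_b = execution_time(I, cpi=2.0, clock_rate_hz=3.0e9)  # ~0.667 s for I = 1e9

print(f"Time_A = {time_a:.3f} s, Time_B = {time_b:.3f} s")
print(f"Time_B / Time_A = {time_b / time_a:.3f}")  # ~1.333, i.e. 4/3
```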