Hi,
I ran a benchmark test and I'm having trouble interpreting it, because I have no idea what the different values mean.
Here are a few rows of my output:
Quote:
Timings for 2048K FFT length (6 cores, 1 worker): 2.42 ms. Throughput: 413.85 iter/sec.
Timings for 2048K FFT length (6 cores, 6 workers): 18.79, 19.54, 18.55, 18.68, 18.50, 18.08 ms. Throughput: 321.19 iter/sec.
Timings for 2048K FFT length (6 cores hyperthreaded, 1 worker): 3.07 ms. Throughput: 325.58 iter/sec.
Timings for 2048K FFT length (6 cores hyperthreaded, 6 workers): 32.95, 19.79, 20.43, 17.03, 23.78, 17.67 ms. Throughput: 287.22 iter/sec.
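From the numbers above, the reported throughput looks like it is (number of workers) divided by the mean per-iteration time. That is just my reading of the output, not anything documented; a quick sanity check in Python:

```python
# Timings copied from the "6 cores, 6 workers" row above.
# Assumption (mine, not from the docs): throughput = workers / mean iteration time.
times_ms = [18.79, 19.54, 18.55, 18.68, 18.50, 18.08]
workers = 6

mean_seconds = sum(times_ms) / len(times_ms) / 1000.0  # mean time per iteration
throughput = workers / mean_seconds                     # iterations/sec across all workers

print(f"{throughput:.2f} iter/sec")  # ~321, close to the reported 321.19
```

The single-worker row checks out the same way: 1 / 0.00242 s is roughly 413 iter/sec, matching the reported 413.85.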
Why does the throughput decrease as the FFT length increases?
And what exactly do the millisecond values measure?