
If you have access to them, a much more informative datapoint would be `openssl speed rsa` or `lzbench`. Sysbench is just a stunt; it doesn't indicate much at all.
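For example (standard `openssl speed` usage; the RSA test exercises whatever assembly and crypto-extension paths the build was compiled with):

```shell
# RSA sign/verify throughput at several key sizes
openssl speed rsa

# Or just one key size, which finishes much faster
openssl speed rsa2048
```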


Sysbench is a better synthetic test in my opinion, because it does a lot of varied grinding instead of trying to evaluate the entire CPU with one very narrow, specific task. Even on the same Linux distribution there can be big differences between the x86 and Arm builds, because OpenSSL's use of assembly optimizations and CPU crypto acceleration inevitably differs between architectures, even for the RSA benchmark.

To offer a somewhat more varied example: I've run the POV-Ray benchmark on Debian 11 on Oracle's Ampere Altra servers ("A1.Flex") against an identical setup/build on a 2 GHz EPYC 7281-based x86-64 VPS. On that single-threaded test the Arm VPS handily outpaced the EPYC, with almost 2x the performance.


`sysbench cpu` basically measures whether a single primitive inside sysbench is correctly optimized for your platform, which is almost meaningless. Its run-to-run variation is enormous, because a lot rides on whether a particular data structure happens to be optimally placed and aligned, which sysbench makes no effort to control. On a typical hyperthreaded x86 machine you will see 100% variance or worse depending on whether sysbench's two threads land on the same core or on different cores, so you must control thread placement with `taskset` if you want the result to mean anything.
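For example (the core IDs below are illustrative; check your own machine's topology with `lscpu -e` first):

```shell
# Which logical CPUs are SMT siblings / share a physical core?
lscpu -e=CPU,CORE,SOCKET

# Two threads pinned to SMT siblings of one physical core
# (assuming logical CPUs 0 and 4 are siblings on this machine):
taskset -c 0,4 sysbench cpu --threads=2 run

# Two threads pinned to separate physical cores:
taskset -c 0,1 sysbench cpu --threads=2 run
```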

On my local machine with 4 threads I get ~10k events per second on cores 0+2+4+6, but on cores 8-11 I get ~13k. Does this mean that Gracemont Atom is 30% faster than Golden Cove Core? No, it is only measuring the fact that the efficiency cores happen to share an L2 cache.
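You can see that cache sharing directly via standard Linux sysfs paths (`index2` is the L2 cache on most x86 parts; the exact index can differ by CPU):

```shell
# List which logical CPUs share each L2 cache
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
  l2="$cpu/cache/index2/shared_cpu_list"
  if [ -r "$l2" ]; then
    printf '%s: L2 shared with CPUs %s\n' "${cpu##*/}" "$(cat "$l2")"
  fi
done
```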


Here's an lzbench run:

dd if=/dev/urandom of=1GB.bin bs=64M count=16 iflag=fullblock
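The results below come from an invocation along these lines (lzbench's `-e` flag takes codec,level lists, per its README; memcpy is always reported as a baseline):

```shell
# zstd at compression levels 2, 5 and 9 against the 1 GiB random file
lzbench -ezstd,2,5,9 1GB.bin
```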

Intel:

  Compressor       Compress. Decompress.  Compr. size   Ratio  Filename
  memcpy           5062 MB/s  5013 MB/s    1073741824  100.00  1GB.bin
  zstd 1.5.2 -2    2083 MB/s  4743 MB/s    1073766410  100.00  1GB.bin
  zstd 1.5.2 -5     210 MB/s  4775 MB/s    1073766410  100.00  1GB.bin
  zstd 1.5.2 -9    85.5 MB/s  4774 MB/s    1073766410  100.00  1GB.bin

Arm:

  Compressor       Compress. Decompress.  Compr. size   Ratio  Filename
  memcpy          10876 MB/s 10950 MB/s    1073741824  100.00  1GB.bin
  zstd 1.5.2 -2    3175 MB/s 11168 MB/s    1073766410  100.00  1GB.bin
  zstd 1.5.2 -5     192 MB/s 10967 MB/s    1073766410  100.00  1GB.bin
  zstd 1.5.2 -9     146 MB/s 10909 MB/s    1073766410  100.00  1GB.bin


Looks like the ARM one has twice the memory bandwidth, which would help with a lot of workloads.


It's more a case of both having roughly the same total bandwidth, but the Arm chip being able to exploit all of it from a single core, whereas on Intel you need to exercise all (or at least several) cores to drive the memory to its limits.
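A rough way to check that scaling yourself is sysbench's memory test at different thread counts (sizes below are arbitrary; compare the MiB/sec figures):

```shell
# Single-core memory bandwidth
sysbench memory --threads=1 --memory-block-size=1M --memory-total-size=20G run

# All cores hammering memory at once
sysbench memory --threads="$(nproc)" --memory-block-size=1M --memory-total-size=20G run
```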


Nice. Showing off Neoverse's superior single-core load/store abilities.


Isn't that just measuring hardware support for RSA? Are many systems bottlenecked on RSA perf?


New Hetzner ARM:

                     sign      verify     sign/s  verify/s
  rsa  512 bits  0.000070s  0.000006s  14268.2  159885.6
  rsa 1024 bits  0.000405s  0.000021s   2468.4   47757.7
  rsa 2048 bits  0.002847s  0.000078s    351.3   12830.1

Hetzner x86:

                     sign      verify     sign/s  verify/s
  rsa  512 bits  0.000067s  0.000004s  14893.9  240902.8
  rsa 1024 bits  0.000127s  0.000009s   7845.9  114300.9
  rsa 2048 bits  0.000874s  0.000027s   1144.6   36800.6


If it's of any value to anyone, perhaps as an indication of hardware/environment implementation, I get identical numbers (<1% difference) on Oracle's Ampere Altra servers with OpenSSL 1.1.1n on Debian 11.


Due to a peculiar design issue, Neoverse N1 underperforms when using RSA. It doesn't affect other use cases and is fixed on newer Arm cores.


It would be very helpful to have this benchmark for instances at the same price point as well. E.g. the 2 vCPU + 4 GB ARM instance is 4.52 EUR while the 1 vCPU + 2 GB x86 instance is 4.51 EUR, so it would make sense to do a comparison at that level too.



