AMD Launches Ultra-Low-Power Ryzen Embedded APUs (anandtech.com)
373 points by 1900jwatson on Feb 28, 2020 | 144 comments


I work in (way smaller) embedded development, so I was curious about the low-level I/O capabilities. If you say "this SoC is for embedded", I am going to wonder about the SPIs, I2Cs, UARTs, and GPIOs.

I surfed up the "product brief" [1] which states:

• Up to 4x USB 3.1 (10Gb/s) / 2x Type-C® with ALT. DP power delivery capable

• 1x USB 3.1 (5Gb/s)

• 1x USB 2.0

• Up to 2x SATA ports

• NVMe support

• eMMC5.0, SD3, or LPC

• Up to 16 lanes of PCIe® Gen3 (8-lane GFX, 8-lane GPP), 7 links max

• 2x 10 Gigabit Ethernet

• 2x UART, 4x I2C, 2x SMBus, SPI/eSPI, I2S/HDA/SW, GPIO

So ... that's pretty good, then. No clear number on the GPIOs but I guess it will be at least "a handful" since these are no small packages.

[1] https://www.amd.com/system/files/documents/v1000-family-prod...


My guess is the GPIO is similar to the 16 bits on the older G-series SoC. I think it's more for LED indicators, chassis intrusion, and the like. But the SPI and I2C buses make that kinda moot: you can hang GPIO and PWM expanders off them, or interface to a micro/CPLD/FPGA. In the end it's still more PC than microcontroller.
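To make that concrete, "GPIO over I2C" in practice is just a cheap expander chip on the bus. A minimal sketch, assuming an MCP23017 expander at address 0x20 on bus 1 (both assumptions; adjust for your board), using the smbus2 library on Linux:

    # Minimal sketch: toggling pins on an I2C GPIO expander (MCP23017 assumed)
    # from Linux userspace. Bus number and 0x20 address are illustrative only.
    from smbus2 import SMBus
    import time

    I2C_BUS = 1     # e.g. /dev/i2c-1; depends on the board
    ADDR = 0x20     # MCP23017 default address with A0-A2 tied low
    IODIRA = 0x00   # direction register for port A (1 = input, 0 = output)
    OLATA = 0x14    # output latch for port A

    with SMBus(I2C_BUS) as bus:
        bus.write_byte_data(ADDR, IODIRA, 0x00)      # all 8 port-A pins as outputs
        for _ in range(5):
            bus.write_byte_data(ADDR, OLATA, 0xFF)   # drive all pins high
            time.sleep(0.5)
            bus.write_byte_data(ADDR, OLATA, 0x00)   # drive all pins low
            time.sleep(0.5)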

Side note: I've yet to see anyone using AMD's 10GbE ports, or any driver support in any OS for any existing AMD SoC board. Anyone know what's up with that?


I also found it odd that nobody ever talks about the NICs. I wonder if they're buggy or deficient in some way. STH said "We were able to pass 10Gbps of traffic through the NIC" so there is some software support. https://www.servethehome.com/amd-epyc-3251-benchmarks-and-re...


I wonder when we'll get an SBC with it.


The Udoo with Ryzen has been around for 2 years now. https://www.udoo.org/udoo-bolt-the-amd-ryzen-based-maker-boa...


It's not with this ultra low power embedded APU though.


"Ryzen-pi" sounds like fun!


Or Raspberry Rye.


In Canadian, that sounds like a cursed beverage.


Ryzeberry Pi


Ryzenberry has even better ring to it


Ryzberry Pi sounds even better IMHO


To me this (and the other ones) just sound too similar to Raspberry. You shouldn’t choose a name that could sound like a typo of something else.

Ryzenberry adds a syllable, so there is no ambiguity, and the hard Z makes it sound more powerful, which it is.

The only issue I can foresee is that you might need AMD's permission for the name, because it includes a protected trademark.


...said scooby doo?


Plenty are in the pre-production phase, including the scam Atari VCS (R1606G).


Hopefully this means passively-cooled SoC Mini-ITX boards that don't cost an arm and a leg. I've been speccing out a new home NAS and had hopes for something along those lines, but both the SuperMicro (A2SDi-4C-HLN4F, ~$350) and ASRock (EPYC3251D4I-2T, ~$880) options I found struck me as overpriced.

For comparison, when I did this five years ago, I was able to get an Athlon 5250+motherboard for $137. It's a little surprising that I can't seem to do any better after all this time.


I've replaced the case of my NUC with a passively cooled one from Akasa. I forget what I paid for it; it seems the one I had sells for 50 EUR (an old NUC I bought around 2015/2016). The NUC itself was 150 EUR or so. The highest temperature I can get it to is 50 Celsius or so. It was really easy to switch the case, though you did need external antennas for that (bought them from AliExpress).

I really want to switch it over to something equivalent from AMD. Unfortunately it seems that equivalent silent boards are mostly intended for specialized businesses. Meaning, the prices are high.

Even with a newer NUC I'd probably still pay something like 150 EUR for the NUC and 50 EUR for a passively cooled Akasa case. All the AMD options I've found are way more expensive than this unfortunately. Sometimes I cannot even find a seller for some of the options.

Edit: I'd prefer if the case is small. NUC is already small. It could be a bit bigger than a NUC but ideally not.


What would be a proper way to attach something like 4x 3.5" SATA HDDs to a NUC form factor?


I actually did just this.

There are two main ways I considered:

1. Buy a 4-bay USB enclosure.

2. Put an HBA in a Thunderbolt 3 -> PCIe enclosure, put the SATA disks in a standard PC case with a SAS port expander, connect the SATA disks to the port expander with a SATA breakout cable, and connect the newly created DAS to the HBA with a standard SAS cable.

I ended up going the first way. I bought two of them that have USB 3.1 Gen 2 UASP interfaces and filled them with shucked 12TB disks. They're attached to the NUC using regular A->C USB 3.1 Gen 2 cables while the Thunderbolt port is connected to a Thunderbolt -> 10GbE adapter. It works surprisingly well as the backup server it's intended to be, the main issue being that under very high I/O loads, the USB interfaces completely die for some reason I've been unable to ascertain and can't be recovered without rebooting.
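(One thing that might be worth trying before a full reboot is forcing a reset of the stuck device from userspace via the USBDEVFS_RESET ioctl. A rough sketch, with placeholder bus/device numbers, and no guarantee it recovers this particular failure mode:)

    # Sketch: force-reset a USB device via USBDEVFS_RESET instead of rebooting.
    # Bus/device numbers are placeholders; find yours with `lsusb`.
    import fcntl
    import os

    USBDEVFS_RESET = 0x5514  # _IO('U', 20)

    def reset_usb(bus: int, dev: int) -> None:
        path = f"/dev/bus/usb/{bus:03d}/{dev:03d}"
        fd = os.open(path, os.O_WRONLY)
        try:
            fcntl.ioctl(fd, USBDEVFS_RESET, 0)
        finally:
            os.close(fd)

    if __name__ == "__main__":
        reset_usb(2, 3)  # example: bus 002, device 003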

I believe the second option would have been much more stable but also bulkier and more expensive.

There's also a third option: install an M.2 to PCIe riser in the NUC and connect an HBA to the NUC directly, using a process like this [0]. This is a bit too Frankenstein for me.

However, I want to emphasise very strongly that I don't recommend doing this. There are tons of much better options around. HP's MicroServer line in particular is definitely worth thinking about, you can stack ODROID HC2s if you're interested in Ceph or GlusterFS, and there are plenty of small PC cases that can hold 4 3.5" drives and a motherboard.

[0]: https://ganon.club/index.php/2019/04/04/getting-10gbps-on-a-...


> there are plenty of small PC cases that can hold 4 3.5" drives and a motherboard.

Can you recommend one? In a quick newegg search [1], I see the Fractal Design Node 304 for $90 + whatever a PSU costs. Anything cheaper? I also see a 3-bay COOLER MASTER Elite 110 for $50.

I've been writing NVR software [2] and am trying to put together hardware recommendations for a small, inexpensive machine.

* The ultra low budget option is a Raspberry Pi + a 2-bay USB SATA dock. It works surprisingly well most of the time, but cheap USB-attached storage is a little sketchy. I've had corruption a couple times and suspect the dock.

* I'd like something a step or two up that has the drives in the main case. The Helios64 looks attractive but I'd like to see what comparable x86_64-based options there are. The MicroServer you mentioned looks interesting also - not as pretty as the Helios64, not hot-swappable, a little pricier, but a lot more powerful.

edit: this page [3] lists some nice cases also. More bays than I'm looking for but nice anyway.

[1] https://www.newegg.com/p/pl?N=100007583%20600030561%20600030...

[2] https://github.com/scottlamb/moonfire-nvr

[3] https://butterwhat.com/2019/06/16/brians-top-three-diy-nas-c...


The "canonical" options are the Fractal Design Node 304 and the Silverstone DS380B. If you're willing to go bigger and want more drives, take a look at this thread [0].

[0]: https://forums.serverbuilds.net/t/guide-nas-killer-4-0-fast-...


I've had a Fractal Design Node 304 for the last five years and it's been terrific. I put a Corsair PSU in it that only turns on its fan under load, so it runs silently just about all the time. I kept it under my TV for a while and it was completely inaudible while looking like it belonged there, and then after I moved it went into a corner of the bedroom (for wired access to the router) where it's been similarly unnoticeable.


Hmm, I'd try something like http://www.iocrest.com/en/product_details594.html + eSATA to external enclosure (probably custom 3d printed).

These adapters are based on Marvell chips from what I remember and are quite reliable but tend to get warm.

https://www.marvell.com/content/dam/marvell/en/public-collat...


Thanks! I'm using an HP MicroServer right now, but for various reasons I'm thinking of getting something else. They released a 10 Plus version, so I'm comparing alternatives. Your second variant definitely sounds very interesting, as it should be possible to upgrade the NUC independently. I wonder if USB or Thunderbolt connections are considered reliable for 24/7 use.


> I wonder if USB or Thunderbolt connections are considered reliable for 24/7 use.

I think Thunderbolt should be fine since it's essentially PCIe. USB so far has been okay as long as you don't push it too hard. When the throughput gets too high it dies.


Have you considered the Helios64? Modest CPU (6-core ARM: 2 Cortex-A72s, 4 Cortex-A53s) but a nice enclosure, a 2.5 Gbps port + 1 Gbps port, and an excellent price: only $285 for everything but the drives.


In that price range, I'd far rather get an Udoo bolt V3 for $330. Dual Ryzen cores (4 threads, 3.2GHz) and way better IO along with the option for way more RAM. Case and charger add another $45, but total performance is way higher. The quad core (8 thread, 3.6GHz) version with an 8CU GPU runs about $420.

Going from $300 to $380 or $500 with a similar form factor and many times more performance seems like a good deal for a lot of tasks.


Very nice find, thanks. I checked that it can do 32GB ECC. However, for mass storage it only seems to have 1x NVMe, while some quick mental math tells me there should be enough lanes for 2x NVMe or PCIe x4 Gen 3.

Do you know something similar to the Udoo that would have at least 2x NVME + 1x SATA 2.5?

From the Udoo specs:

> 32GB eMMC 5.0 high-speed drive

> SSD SATA module slot, M.2 socket key B 2260 (also features PCIe x2)

> NVMe module slot, M.2 socket key M 2280 (PCIe x4 Gen 3 interface)

> SATA 3.0 6 Gbit/s standard connector


There are "commercial" boards, but they price by appointment only and probably cost 3-10x as much.

Here's an IO comparison

Udoo Bolt

* 1 sata 3 port

* 1 m.2 type B (2x PCIe 2 lanes)

* 1 NVME drive (over 4x PCIe 3 lanes)

* 1 eMMC 5 (32gb soldered -- a mistake IMO, but one could potentially solder on a different chip)

Rockchip rk3399 (raw specs)

* 1 eMMC 5.1

* 2 SD/MMC 4.5

* 4x PCIe 2.1 lanes

The Helios64 is stretching all its SATA across those 4x PCIe 2.1 lanes. The single M.2 type B on the Udoo has that much bandwidth by itself (and the NVMe has the equivalent of 8x additional PCIe 2 lanes). If you really wanted, you could probably find an adapter that turns the NVMe slot into PCIe and then into quite a few SATA ports, as that's basically all the Helios is doing anyway.
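Rough numbers behind that comparison, for anyone who wants to sanity-check it (per-lane figures are approximate post-encoding throughput, and the B-key slot's PCIe generation is an assumption either way):

    # Back-of-the-envelope lane bandwidth comparison (approximate, post-encoding).
    MB_PER_PCIE2_LANE = 500    # PCIe 2.x: 5 GT/s, 8b/10b   -> ~500 MB/s per lane
    MB_PER_PCIE3_LANE = 985    # PCIe 3.x: 8 GT/s, 128b/130b -> ~985 MB/s per lane

    helios_sata_pool = 4 * MB_PER_PCIE2_LANE   # Helios64: all SATA behind 4x PCIe 2.1
    udoo_key_b_gen2  = 2 * MB_PER_PCIE2_LANE   # Udoo B-key slot if it is PCIe 2 x2
    udoo_key_b_gen3  = 2 * MB_PER_PCIE3_LANE   # Udoo B-key slot if it is PCIe 3 x2
    udoo_nvme_key_m  = 4 * MB_PER_PCIE3_LANE   # Udoo M-key slot: PCIe 3 x4 NVMe

    print(f"Helios64 SATA pool: ~{helios_sata_pool} MB/s")
    print(f"Udoo M.2 key B:     ~{udoo_key_b_gen2}-{udoo_key_b_gen3} MB/s (gen-dependent)")
    print(f"Udoo NVMe key M:    ~{udoo_nvme_key_m} MB/s "
          f"(~{udoo_nvme_key_m / MB_PER_PCIE2_LANE:.0f} PCIe 2 lanes' worth)")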


Thanks a lot! For what I do, I'd need to add a key-B NVMe drive (not SATA, because of the IOPS) alongside the "normal" NVMe, if I understand correctly.

Do you know of any such drive with at least 512GB that works as NVMe (not SATA)? Or can I just plug in a 2260 B+M-keyed drive and hope it doesn't default to SATA?


Their docs claim either NVME or SATA work.

https://www.udoo.org/docs-bolt/Hardware_References/M.2_Conne...


ink_13 was talking about NASs, so I recommended a machine for that use case. Are you talking about a NAS as well? If so, what comparable enclosure can you find for $45? You need more SATA ports also—an expansion card or port multiplier or some such.

The Udoo is a much more powerful little machine, but the Helios64 seems more practical as a low-budget NAS.


> SoC Mini-ITX boards that don't cost an arm and a leg

It really depends on your requirements. If you can generalize "Mini-ITX board" to "small form factor x86", then the ODROID H2 fits the bill for less than $200 (2x 1Gbps Ethernet, 2x SATA, 1x M.2), unless you require ECC (which, admittedly, is an interesting feature of the Ryzen boards in the article).

There are certainly other ARM boards meeting NAS requirements in one way or another, but I personally prefer x86 for simplicity (due to compatibility).


Gotta second the ODROID H2 recommendation... Much more cost effective than the NUC/NUC-clones. Also awesome for a small home lab - Kubernetes clusters, etc.


It most probably will cost an arm and a leg, since it's not designed to be a Raspberry Pi competitor but a device that will be used in MRI scanners, large digital signage displays, touchscreen industrial controllers, ATMs, and other expensive low-volume hardware.


Why would you use such an ultra-low-power SoC in an MRI scanner?


And I guess noise is really not a problem there either, not just power.


Yeah... I really wish AMD packaged these for their AM4 socket so we could use commodity boards with them. Sure, at that TDP your run-of-the-mill mainboard is a significant power draw, but the overall system is still less than the 35W CPU I ended up with.

(I just finally replaced my Core 2 Duo with a low-power AMD because the price was so wildly better than the Intel option, and I could actually order the AMD one from retailers rather than shady eBay second-hand CPUs.)


That was exactly my thought -- my desktop is currently a Ryzen 3 APU anyway, and I'd love to go with a cooler-and-quieter option, as long as it's socketed so I can drop in a new chip in a few years as my needs change. (Also hello from the D!)


What did you end up with?


You can have a quad-core AMD board with dual (or triple) gigabit for a hundred euro https://www.pcengines.ch/apu2.htm


I use one of these (running OpenBSD) as a router. 100% uptime, excluding OS upgrades and power failures.

It’s fast enough to run small VMs, and is passively cooled.

PC Engines has a nice business model. They put out board revisions infrequently, but then sell them unchanged (other than errata) for decades. This lets OS vendors provide rock solid support over time, and also makes them appropriate for use cases where you want to be able to buy an identical replacement years from now.

As Moore’s Law ends, I hope more hardware vendors go this way.

Hopefully PC Engines will issue a new Ryzen based APU line sometime soon (since the article says these will be produced by AMD for 10 years).

(Edit: they support boards for about one decade. Not multiple decades)


I've been actually wondering what they might use if they want to keep 4 cores. V1500B looks nice but it is rated 12-25W.

https://www.amd.com/en/product/8496


These are GX-412TC Jaguar CPUs; they are very old. Introduction date: June 6, 2014.

http://www.cpu-world.com/CPUs/Puma/AMD-G-Series%20GX-412TC.h...


They are still dual-issue out-of-order x86-64 cores. Same architecture as in Xbox One. Plenty of power for the TDP.


I have one as my home router and got an additional two to run as GlusterFS nodes, and couldn't be happier (well, Gluster performance is really unsatisfactory at ~50MB/s from these two running replicated on one LUKS-encrypted SSD each, but I've realized now that I probably shouldn't expect much more from this kind of setup - you really need to run distributed on a larger number of smaller nodes to make Gluster performant, so I can't blame the APUs).

Now I'm having a hard time deciding what to go for for additional worker nodes in my cluster - go all-in on PC Engines APUs, ODROID H2, Udoo Bolt, a larger number of some incarnation of RK3399 SBC (Khadas VIM3/NanoPi M4/Rock Pi), or even Raspberry Pi 4s - or whether it's better to wait a bit for some of these new AMD SFFs.

If power consumption, space and noise weren't a factor, it'd be a no-brainer to get two or more 19" rack servers, but that's a no-go.

I have a feeling that the playing field for small-form-factor, power-efficient and quiet compute clusters is going to change dramatically soon with new products based on AMD and ARM, but also not sure if it's worth waiting if I want things up and running by this summer.


What sort of requirements do you have? Maybe something like a HP Microserver would work for you?

https://www.servethehome.com/hpe-proliant-microserver-gen10-...


Yes, even if they're a tad on the large side of what I'm looking for, they still look quite compelling - unfortunately the relevant models are not on sale in my region, and the pricing becomes less attractive after shipping and import duties.


I already went far, far down that rabbit hole for a max-capacity 3.5" HDD + ZIL SSD + L2ARC SSD FreeNAS 4U 10 GbE NAS, comparing Supermicro/ASRock Rack DIY builds against Dell, HP, Lenovo and iXsystems. For home use, you're better off buying a used 2U-4U Supermicro or Dell box with an old Xeon or two. They're so cheap that buying new is like throwing money away on a new car with instant depreciation. In the Bay Area, you can even order up and swing by UNIX Surplus for some deals, although sometimes they're higher than the usual sources.

I was going to get some of the WD HGST HC530 14 TB self-encrypting drives until they came out to ~$600 each. (Ouch.) The regular ones are <$400, similar to the ones BackBlaze uses.


> For home use, you're better off buying a used 2U-4U Supermicro or Dell box with an old Xeon or two.

But these are not passive-cooled at all and are very, very, VERY noisy in a home environment.


And also the electricity cost over the lifetime (say 10 years 24/7) is quite substantial. It makes no difference for a powerful server but for a small low power thing it can be quite a chunk of the total cost.


I was considering getting one, but it's a much louder, larger and hotter running equipment than I want in my house.

For my purposes it's much better to keep a dedicated Synology Nas for my data, and a couple low-power NUCs / compact pcs as a kubernetes cluster for actual hosting / experiments.


Uhm, those things sound like a damn jet engine. The post above asked for a passively cooled part.


I've been speccing something like this recently and you will definitely get a nice server for not much money. However, the CPU performance may not be great compared to specifying a new low-cost Ryzen with ECC RAM. And the power consumption will probably not be great either. Ryzen has made some great home server configurations quite affordable with its great performance, low cost, and ECC support.


>For comparison, when I did this five years ago,....

That is because Intel 14nm first launched in 2014/2015, and we haven't had any node improvement since then. Five years later, 10nm is barely out. Without competition we get stagnant improvement.


I was at Embedded World this week, and ASRock, Gigabyte, and MSI were all showing rack-mounted servers with consumer-grade Ryzens.

Just by looking at the revision numbers on their boards, it seems clear that they've already gone through quite a number of iterations, just as usually happens with more proper server mobos.

There must be a demand for them.


Historically I've always gone with Intel, but I've been wondering if new AMD CPUs would work well as hypervisor hosts. A Ryzen Threadripper 3990X, for example, has 64 cores, 128 threads, and supports ECC RAM. Intel by comparison doesn't offer anything close to those numbers on a single CPU. So, does anyone have any thoughts as to why using the 3990X or some other similar AMD product wouldn't be a great idea compared to using some Intel solution? Is the, say, dual-Xeon approach better in some way I don't know? I can't tell if Intel is actually better these days or if they just benefit from all the contracts they've been able to get with major manufacturers over time.


Something to keep in mind if you use it: VMware just changed their licensing so that CPUs with over 32 cores require an additional ESXi license. So that 64-core processor now requires 2 licenses instead of one!


Wow, I had no idea. Super useful to know. Just seems like a money grab from VMware.


MS has done the same thing; they all saw the writing on the wall a few years ago.

MS licensing is complicated, but you must buy a minimum of 16 cores, then you get license packs for additional cores.
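As a rough illustration of the arithmetic (assuming a 16-core minimum per server and 2-core license packs; this is an approximation, not licensing advice):

    # Rough per-core licensing arithmetic (approximate; not licensing advice).
    import math

    MIN_CORES_PER_SERVER = 16   # assumed minimum core licenses per server
    PACK_SIZE = 2               # assumed 2-core license packs

    def core_licenses_needed(physical_cores: int) -> int:
        licensed = max(physical_cores, MIN_CORES_PER_SERVER)
        return math.ceil(licensed / PACK_SIZE) * PACK_SIZE

    for cores in (8, 16, 32, 48, 64):
        print(f"{cores:2d} physical cores -> license {core_licenses_needed(cores)} cores")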


My grey market Windows Server 2019 Standard (with Hyper-V) key happily activated on my 48-core system. What's the proper way to buy a license?


I can tell you why I just chose a dual-Xeon workstation over a Threadripper, despite it being considerably more expensive for lower performance: memory. The Threadripper motherboards apparently max out at 256 GB of RAM. I put 1 TB in the Xeon machine and could have gone higher if I needed it.

It is really too bad, but 64 cores with a max of 256 GB is super unbalanced.


Step up to EPYC processors if you need memory; that's the competitor to the Xeon. Threadripper is for enthusiasts, not specifically for servers.


Thanks, I'm much less familiar with AMD product lines and didn't know about EPYC.


EPYC supports up to 4TB of RAM. Here's Linus having fun with one back in September. https://www.youtube.com/watch?v=HuLsrr79-Pw


I clicked that link expecting Linus Torvalds and was very confused. The mononym "Linus" is already taken :-) It would be very exciting to see Linus Torvalds playing with giant amounts of RAM but sadly he doesn't seem to make a lot of videos like that.


If you're talking operating systems, it's Linus Torvalds. If you're talking tech videos, it's Linus Sebastian. Linus Media Group gets a lot of the cool hardware first and often makes a point of getting some unofficially to mess with it in ways they're not supposed to.


Ryzen 3rd gen max memory is 128GB, which matches a realistic requirement for cookie-cutter cloud hosting.

A wicked fast quad core instance with 32 gigs of mem is a top tier offering as far as cloud hosting goes.

And as far as cost/perf goes, Ryzen 1Us can easily yield more money than a mid-tier Xeon offering, especially if you are buying twin or quad systems (2 or 4 independent systems in a single 1U enclosure).


Threadrippers are not really 'workstation' CPUs; they are HEDT CPUs. A more apples-to-apples comparison would be to EPYC CPUs.


Definitely an important factor, didn't check the max RAM. Kind of surprises me that AMD would put all this work into building a many core/thread CPU only to have a 256 GB RAM limit but if it's intended for gamers/workstations and not servers then I guess 256 GB would be plenty for that. But I agree it's unbalanced regardless.


That's the market -- EPYC is the server oriented product, and it supports a lot more RAM, though I think still less than Intel (I hand-wavily remember 4TB for EPYC, 6TB for Xeon, but with a huge price difference between them).


Given the design of threadripper, it's not really "all this work" to make a 64 core model, given the 32 core model. Just pile more of the things you can scale on the chip without worrying too much about the parts that don't scale. And the same happened with previous generation. 16 core 2950X turned into 2990WX, but with a weird crippled memory interface. At least 3990X is more balanced than that.


Threadripper is the prosumer chip, Epyc is the server chip. The current Epyc based on the 7nm Zen2 architecture can handle 4TB of ram.


I am looking into a Threadripper 3990x (64 cores) with Proxmox, which is free, rather than dealing with licensing with ESXi and all that.


Threadripper is limited to quad channel ram, so you might want EPYC or Xeon instead if that is an issue.


I was looking for adding at least one Threadripper server to our local HPC system. I only found one ASRock motherboard for TR4, but none yet for sTRX4. I hope we will see 1U rackmounted sTRX4 systems soon.

Our new HPC system will mostly consist of Epyc 7742, but having one node with super high single-thread performance would be nice for less well parallelized applications.


Who do you think is wanting them? Small-Medium business or enthusiasts/prosumers?

I can’t see enterprise vendors selling them and I doubt the support is great.


> I can’t see enterprise vendors selling them and I doubt the support is great.

Hosting companies don't end with enterprise cloud stuff. The availability of consumer-grade hardware chips has always been much better than server ones for anybody but tier-1 vendors.

Before, there was little incentive to chase that market, as even the best consumer-grade CPUs were not contenders against Xeons on core count, ECC, IPMI support, or cache size.

But now, even very cheap 3rd-gen Ryzens can easily beat mid-tier Xeons: more cores, more cache, ECC support, support for ASPEED remote management. And they are way more power efficient, so making a twin or even a quad system in 1U is possible.


Pfsense router or Server/NAS for home or small office


For reference, I use a trio of 3700 servers to run Sufficient Velocity -- a fairly large webforum.

That's already overkill. It's nice having the spare capacity, but it would be absolutely foolish to lease a full-blown EPYC trio.


Would you please post the model numbers? I am overdue to upgrade. Especially interested if the motherboards have multiple LAN ports. Thanks!


1U2LW and 1U4LW for ASRock. This is what I googled up just now. Maybe they were showing a model with a different board.

They have quite a few:

https://www.asrockrack.com/general/products.asp#AMD

The MSI guy was particularly secretive. MSI appears to actually have a separate sub-brand they don't advertise, to do server OEM AMD products without losing Intel partner status.


There definitely is. Particularly for some applications requiring the highest possible frequency (Ryzens are available at higher frequencies than EPYCs). Plus Ryzens support ECC, so they are fine server CPUs.


Just a note, as I was recently slightly disappointed when shopping for RAM for a recent Threadripper build:

The current max capacity for Threadripper ECC RAM is 128GB. The largest unbuffered ECC DIMM is 16GB. With an EPYC (or a Xeon) you could use registered memory, which is available in higher capacities.

If you went 3990X, you'd be limited to 1GB per thread or 2GB per core. This isn't a showstopper, but could be a bottleneck depending on how RAM-intensive your workload is.

Let me be clear, I'm still quite happy with my recent purchase and build. Threadripper has enabled unprecedented core counts at very low prices.


Along with that, I've not reliably been able to find fast unbuffered ECC RAM either. I've seen some 2333MHz stuff, but not in 16GB DIMMs, and nothing faster actually for sale (I've seen 2400MHz advertised but not actually available). I've got 128GB of 2133MHz and it works fine, but it'd be nice to get the extra performance considering how much it matters for the Zen-core CPUs.


Samsung 8gb M378A1K43CB2-CTD and 16gb M378A2K43DB1-CTD

I've only heard about these from buildzoid's overclocking video. https://www.reddit.com/r/overclocking/comments/exnrcz/buildz...


Those don't appear to be ECC capable unfortunately. https://memory.net/product/m378a2k43db1-ctd-samsung-1x-16gb-...


Crucial has 16GB unbuffered ECC at 2666MHz native frequency, available within the last month. It's a very stable overclock at 3000MHz. I think I've got room to move up to 3200. I would love to get to 3600MHz, but that has seemed unstable when I've tried.

CAS latency isn't the best, and tweaking this has resulted in a quite unstable system.

I'm currently running 128GB of this:

https://www.crucial.com/memory/ddr4/ct16g4wfd8266/ct16439854


Neat, that's definitely new and good to hear about. I'm eventually going to upgrade my 1950x and the rest of the system so getting to 3000 would make a huge difference already. I've managed to get my sticks to 2333 but they then report about 3 errors a week, and about 20 a day at 2666. The performance gains from that aren't enough to leave it like that and feel good about it to me.


mind posting your ECC Threadripper build?


I can post a link to a full PCPartPicker list later if you want, or answer specific questions.

It's a 3970X with 128GB (link to RAM in sibling thread) of 3000MHz RAM on a GIGABYTE Aorus Master. No processor overclock. Running Noctua's NH-U14S with one of their high-RPM fans. GPU is nothing special - the cheapest card I could find with DisplayPort 1.4+ and HDMI 2.0+ (for potential dual 4k@60Hz - currently dual 2560x1440@60Hz) - it's a Radeon 550. Boots from NVME and has 40TB raw (20TB usable) spinning disk.

It's a virtualization host and prototyping workstation. I work primarily in data analytics, and it's not uncommon to have prod servers larger than this. I've got enough headroom to realistically test things at scale.


I read about this the other day and am still somewhat surprised it has two 10 GbE ports. I say surprised, and wonder if it has the horsepower to drive them at that speed. Thinking along the lines of a firewall-type application.

Will be very interesting to see tests when they come out and what kind of throughput you can sustain.

[EDIT ADD] My surprise/excitement stems from the fact that to get that kind of feature on a mainstream motherboard, you're always looking at the higher end of the offerings.


We use the Marvell Armada 8K SoC (quad-core ARMv8, clocked at 1.6GHz) in an embedded board. We can sustain 10Gbps TCP throughput to the CPU on that, with around 60% CPU usage on a single core. Bear in mind that these are all large packets (1500 bytes), and it's much harder to achieve this with small packets.

I'd be pretty surprised if the 1.5GHz part couldn't handle the same.
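The small-packet point is easy to see from a quick packets-per-second estimate (ignoring preamble, inter-frame gap and headers, so the figures are slightly optimistic):

    # Why small packets are the hard part: packets/sec needed to fill 10 Gbps.
    # (Ignores preamble/inter-frame gap and headers, so these are rough figures.)
    LINK_BPS = 10e9

    for frame_bytes in (1500, 512, 64):
        pps = LINK_BPS / (frame_bytes * 8)
        print(f"{frame_bytes:5d}-byte frames: ~{pps / 1e6:.2f} Mpps to saturate 10 Gbps")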


Out of interest, can I ask what you're using that kind of throughput for with a SOC?


Our company provides network measurement devices and services (typically used on the internet). We benchmark internet connections, effectively. With some internet connections exceeding 1Gbps these days, we need hardware capable of measuring above 1Gbps. Here's an article that describes our use of this 10Gbps SOC in a little more detail: https://samknows.com/blog/arris-and-virgin-media-test-new-10...


I'd be perfectly happy with the performance boost if it only managed 3 Gbit.

Now we also have 2.5/5 GbE, which might be sensible as well. Ethernet has been stuck at 1 GbE for far too long, and that has skewed everything a bit.


Even the AMD specs just say two 10GbE ports, but nothing about it being an AMD design, or some third-party controllers that are just included in the package.

AMD has previously made their own network controller chips, but not in the last 10-15 years or more, so I would be a little surprised if they now have a 10Gb design. Maybe I just haven't paid enough attention to AMD.


Zen has built-in AMD 10G NICs (using the amd-xgbe driver) but they're usually disabled so nobody talks about them.
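If you have one of these boards handy, a quick way to see which kernel driver is actually bound to each interface (e.g. looking for amd-xgbe) is to read the sysfs symlink, something like this (Linux only):

    # Quick check of which kernel driver each network interface is bound to.
    # Virtual interfaces without a backing device are skipped.
    import os

    SYSFS_NET = "/sys/class/net"

    for iface in sorted(os.listdir(SYSFS_NET)):
        driver_link = os.path.join(SYSFS_NET, iface, "device", "driver")
        if os.path.islink(driver_link):
            driver = os.path.basename(os.readlink(driver_link))
            print(f"{iface}: {driver}")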


It has a controller for the 10GbE ports, but it still depends on the PC manufacturer to actually implement one (or two) 10GBASE-T PHY port(s). I doubt many will, though I wish manufacturers would deliver 2.5G or 5G in the transition, for less expensive and lower-power parts.


To be honest, optics are soooo much cheaper than 10G-T. I've upgraded my home network to 10G in parts (not each link, but the critical ones: for the NAS and for my workstation) and everything is a lot cheaper with optics. Switches are much cheaper per port, transceivers are dirt cheap on eBay, and the Intel X520-DA is cheaper (even if you add the price of transceivers) than the -T variant, not to mention old Mellanox cards.

If distance allows, DAC patch cords are cheap too, and single-mode fiber cables with duplex LC connectors cost almost the same as twisted pair, only slightly more.

One disadvantage is that LC connectors are rather bulky and you can't terminate them yourself as easily as RJ45 ones, so if you need to route a cable through a very confined space, it could be a problem.


I've been using second hand Mellanox for about a decade or so and it's been phenomenal. Great speed, latency and stability.


Ditto. A few times I've had compatibility issues until I upgraded the firmware on the card.

For a long time I used infiniband between fast systems just because the cards were available so cheaply, but now 40gbe is really cheap (and 100gbe is getting there).

I'm really not that fond of 10G-T, it's picky about cables and power hungry. Give me fiber or a dac cable any day.


Cheaper used, and not cheaper if you have to actually pull and terminate OM4 cable. Plus, the long-term costs tend to be higher, because optical SFPs are the modern equivalent of spinning rust when it comes to failure rates.

And the prices of 10GbaseT parts are falling too. And for home use, it runs just fine on quality cat5e.


It's been 20+ years since 1 GbE came around, and 10 GbE came in 2006, but for some reason adoption has lagged hugely in SMB/consumer gear.


1 GbE is just marginally slower than a consumer-grade HDD and matches the fastest residential/SME internet connections. For most, 1 GbE was a noticeable upgrade while 10 GbE isn't, and the cost of hardware is noticeably higher.

10 GbE has decent adoption where it makes sense.


Modern systems have NVME SSDs which can easily fill a 10 gig connection, both read and write speed. It would be nice to see higher speed ethernet on more hardware. I have some 10 gig equipment but it's a couple generations old ebay special.


Disaggregated home SSD? Err, wow!


What do you mean by disaggregated?

SSD is certainly cheap enough for home NAS now.


One of my external HDDs can work at 250MB/s, which means 2GbE. I'd like my home NAS to have at least a 2.5GbE NIC, and my router/switch has to be 10GbE so I don't saturate them while transferring from/to the NAS.

I also plan on adding a few 1080p@30fps security cameras and a centralised LAN host for all the video streams/backups. It will all add up quite quickly, and I'm not even sure a home 10GbE setup will hold its own well in such conditions. I plan to buy an expensive switch/router combo with up to 320Gbps of switching bandwidth though, so it might work.

I believe 10GbE will be with us as the de facto prosumer setup for quite a while, since 1.25GB/s is going to work for 99% of what's needed out there. (Although uncompressed 4K@60fps is almost 2GB/s...)
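Quick sanity check on those numbers (uncompressed frame sizes assume 4:4:4 chroma; real camera streams are compressed, so the NVR side is tiny by comparison):

    # Rough bandwidth math: uncompressed 4K video vs. compressed camera feeds.
    def uncompressed_gbps(width, height, fps, bits_per_pixel):
        return width * height * bits_per_pixel * fps / 1e9

    for bpp, label in ((24, "8-bit 4:4:4"), (30, "10-bit 4:4:4")):
        gbps = uncompressed_gbps(3840, 2160, 60, bpp)
        print(f"Uncompressed 4K@60 ({label}): ~{gbps:.1f} Gbps (~{gbps / 8:.2f} GB/s)")

    # A 1080p@30 H.264 security camera is typically a few Mbps (~4 Mbps assumed),
    # so even a dozen of them barely dent a 1 GbE link, let alone 10 GbE.
    print(f"12 cameras @ ~4 Mbps: ~{12 * 4} Mbps total")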


It's been an artificial market segmentation ploy used by a number of vendors. That ploy is now failing since the primary users of 10G have moved on to faster standards.


Now that I’ve been working on service provider routers for a bit, anything lower than 100G sounds slow. All about perspective, I suppose.


And we are finally starting to see residential high-speed internet at 1G becoming available and widespread.


Bell Canada even offers 1.5Gbps internet and ships a router that only has 1GbE ports, so I had to jump through some hoops to remove the SFP+ from the router they gave me and get it working on my pfSense box. I'm still limited on most of my network to 1Gbps, but at least a few hardwired machines have access to the full 1.5Gbps/1.0Gbps and 10Gbps locally.

Things would be so much easier if all copper ports were at least 2.5Gbps, preferably 10Gbps.


That's some serious dedication!

I would like to upgrade my setup to higher speeds, but the cost of 2.5/5 Gbps networking gear is very difficult to justify. My entire kubernetes cluster is cheaper, and it's much better value/investment.


You could always bond a couple of 1Gbps ports together; not as elegant, but certainly something hackable with what you have, perhaps, and with that, a cheap and good stopgap to eke out a little more life.


I don't see why it wouldn't be able to drive them at full speed. Zen cores (even at that TDP) plus the standard Ethernet hardware offloading should be able to handle that and a lot of extra processing without trouble.


As long as FreeBSD plays nice* and the hardware components are chosen to have drivers, it seems like the perfect chip to put on an SBC for a compact, solid-state pfSense box.

* iXSystems mentioned to avoid AMD for FreeNAS, although maybe it is obsolete advice or doesn't apply to pfSense.


AMD APUs are quite popular as passively cooled routers running pfSense or OPNsense.


After looking at specs and prices, I think I'll just get a used SFF ECC workstation and slot in a quad 10 GbE copper card and a cryptographic accelerator. I can't see the point in paying more for new *sense-specific gear. It might have a fan, but I can always throw in a Noctua if I'm worried about it.


> iXSystems mentioned to avoid AMD for FreeNAS

This seems odd. Long ago FreeNAS on the N54L microserver was a nice setup and it had some AMD chip. I suspect it might apply only to some bleeding edge stuff...


This is also a good time for HPE to make a ProLiant Gen11 with Ryzen... the Gen10 is really inefficient in terms of speed and power consumption.


They just released a MicroServer Gen10 Plus... and that one just switched from Opterons back to Xeons.

I'd like to see EPYCs and Ryzens, but I'm not holding my breath.


I have a Gen8 and it has a Xeon.

I just want to add some perspective to "switched from", because HP already _used_ Xeon processors in its prosumer servers, so I don't really see this as switching back.


I think you meant to say a ProLiant MicroServer? Though I would suggest they throw 3rd-gen Ryzen into their rack-mounted servers as well.

I was at Embedded World a few days ago, and I saw that Acer, Quanta, MSI, and Gigabyte are all quietly showing Ryzen-based 1U servers.



AMD is now riding ahead of Intel in the hardware security arena, by like a mile.

Today’s Flush-Flush KASLR vulnerability

http://cc0x1f.net/publications/kaslr.pdf


This makes me wonder, do companies spend resources trying to find flaws in competitors' products and publicize them? It would make sense.


But by that logic, Intel could finance this so much better than AMD. Yet we still mostly see Intel with the security issues.


Maybe AMD just doesn't have that many issues.


That's what I wanted to say :)


The Threadripper 3970X seems the ultimate sweet spot on perf and cost.

It has 32 cores all operating at an amazing 3.7GHz (a solid 40% faster than most CPUs on the market, especially at that core count).

I’d love to have a rack mounted server with a 3970x

https://www.amd.com/en/products/cpu/amd-ryzen-threadripper-3...


I just built one last week! Love it so far. Running ESXi and virtualizing everything I can think of. The PCIe 4.0 NVMe speeds are incredible.


I wonder if the R1102G can play back 4K video; it would be awesome for a small, passively cooled media PC with a SNES emulator.


I've recommended this before - you don't need a special/low power processor for that!

Technically, any processor with a hardware video decoder will work fine as long as you limit the TDP to something your cooling can manage. Especially mobile processors.

You can easily limit TDP on any major OS. If you wanted some custom software, you'd need to implement it yourself; it's rather easy to do via MSR editing, and Intel has their list of processor MSRs freely available.
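On Linux the easiest route is usually the RAPL powercap sysfs interface rather than poking MSRs by hand. A minimal sketch, assuming the typical intel-rapl path and root privileges (the path and constraint index may differ per machine):

    # Sketch: cap package power via the Linux RAPL powercap interface instead of
    # writing MSRs directly. Path is typical for Intel (intel-rapl); adjust for
    # your machine and run as root. Values are in microwatts.
    RAPL = "/sys/class/powercap/intel-rapl/intel-rapl:0"

    def read_package_power_limit() -> float:
        with open(f"{RAPL}/constraint_0_power_limit_uw") as f:
            return int(f.read()) / 1_000_000

    def set_package_power_limit(watts: float) -> None:
        with open(f"{RAPL}/constraint_0_power_limit_uw", "w") as f:
            f.write(str(int(watts * 1_000_000)))

    if __name__ == "__main__":
        print(f"current long-term limit: {read_package_power_limit():.0f} W")
        set_package_power_limit(15)  # e.g. clamp the package to 15 W for a quiet HTPC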


This will be a very nice CPU in a home media Kodi server, consuming very little power!


Is there any clue what the MSRP is going to be for these chips? I'd like to get a sense of how much an SBC mounting one of these is going to cost.


how does this compare to a raspberry pi?


In addition to being faster CPUs, these have much faster I/O buses.

The RPi 3 could only talk to its 1GBit card at ~300MBit. The 4 can max out 1GBit.

These have 2x10GBit in the CPU package.

Put another way, these can be a high performance NAS/router, and simultaneously run the equivalent of a few RPis in VMs. Previously, AMD built an APU without embedded video.

Hopefully they will continue to do so, and those will sell well into the embedded/appliance space.


And ram capacity, which makes a huge difference with modern software stacks.


I believe these Ryzen APUs will run much faster than a Raspberry Pi, and they run x86 software as is commonly used by Windows. Glancing at the ~4,000 Passmark score I think these Ryzen APUs could even keep up with a single 1080p software video transcode, which I doubt is possible on a Raspberry Pi.

Disclaimer: I only have an RPi 2 running a home server at the moment and have not tried these new Ryzen chips.


A lot more powerful. Example: https://www.smachz.com


Honestly if you are not long on AMD by now, you are missing out. AMD will defeat Intel. It's just a matter of time.


If you are going to use a marketing buzzword like "APU" in the title, please do us a favour and explain what it means.

https://en.wikipedia.org/wiki/AMD_Accelerated_Processing_Uni...

APU = "the marketing term for a series of 64-bit microprocessors from AMD..."

Excuse me for not keeping up with the latest corporate-speak made-up BS.


Forgiven! And here's an explanation for others that doesn't require a new tab:

APU = CPU + iGPU (on the same die)

AMD CPUs contain no integrated graphics, so this makes the function of AMD's APU (c. 2011) branding similar to Intel's historical i-series (c. 2008).

Don't ask me to explain Intel's current i-series branding, though. Apparently as of 2018, not all i3/i5/i7/i9s contain integrated graphics, so it's not quite as clear as AMD's "APU means on-die iGPU" is today.



