is this bullshit? or does ISA not really matter in some fictitious world where we can normalize for process and other factors?
https://www.techpowerup.com/340779/amd-claims-arm-isa-doesnt-offer-efficiency-advantage-over-x86
-
@regehr @steve For example, it's a goddamn NIGHTMARE doing a high-performance memory subsystem for absolutely anything.
This whole "shared memory" fiction we're committed to maintaining is a significant drag on all HW, but HW impls of it are just in another league perf-wise than "just" building message-passing and trying to work around it in SW (lots have tried, but there's little code for it and it's a PITA), so we're kind of stuck with it.
-
@regehr @steve To wit: virtual memory is a lie, by design. Uniform memory is a lie. Shared instruction/data memory is a lie. Coherent caches are a lie, caches would rather be _anything_ else. Buses are a lie. Memory-mapped IO is IO lying about being memory. Oh and the data bits and wires are small and shitty enough now that they started lying too and everything is slowly creeping towards ECCing all the things
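On the "memory-mapped IO is IO lying about being memory" point, a small C sketch (the device address and register layout below are made up for illustration): through a plain pointer the compiler may assume real-memory semantics and merge, reorder, or delete the accesses, which is exactly what a device register can't tolerate.

```c
/* Why MMIO only pretends to be memory (a sketch; UART_BASE and the
 * register layout are hypothetical). A device register has side
 * effects, so each access must actually happen, in program order --
 * hence volatile (plus explicit barriers on weakly ordered cores). */
#include <stdint.h>

#define UART_BASE ((uintptr_t)0x10000000u)    /* hypothetical address */
#define UART_STAT (*(volatile uint32_t *)(UART_BASE + 0x0))
#define UART_TX   (*(volatile uint32_t *)(UART_BASE + 0x4))
#define TX_READY  (1u << 0)

void uart_putc(char c) {
    while (!(UART_STAT & TX_READY))    /* volatile forces a fresh read  */
        ;                              /* each iteration, instead of a  */
                                       /* single hoisted load           */
    UART_TX = (uint32_t)c;             /* a store that is really IO     */
}
```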
-
@regehr ARM and x86 are both godawful messes. Because that's what happens to all successful ISAs - they get cruft.
So - once you move the "old junk" to a slow path and recommend people don't use them, then you have ARM64 vs x64. And now they're pretty close in oddness, and the difference in decoder area/power is small enough that it doesn't matter.
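A toy C sketch of that decoder point (the helper names are mine, and the x86 subset handled is tiny; real length decoding runs to hundreds of cases, which is the point): on ARM64, instruction boundaries are pure arithmetic, while on x86-64 each instruction's length only falls out of decoding it, so finding N boundaries is a serial chain that a wide decoder must break with extra speculative hardware.

```c
/* Toy illustration of the decode-width asymmetry. Finding the Nth
 * instruction in an AArch64 stream is arithmetic; in an x86 stream
 * you must decode every earlier instruction first. */
#include <stddef.h>
#include <stdint.h>

/* AArch64: every instruction is 4 bytes. */
static size_t a64_nth_offset(size_t n) { return n * 4; }

/* x86-64: length depends on prefixes, opcode, ModRM, SIB, disp, imm.
 * This handles only a few one-byte opcodes plus common prefixes. */
static size_t x86_len(const uint8_t *p) {
    size_t len = 0;
    while (*p == 0x66 || *p == 0x67 || *p == 0xF0 ||
           *p == 0xF2 || *p == 0xF3)          /* legacy prefixes */
        p++, len++;
    if ((*p & 0xF0) == 0x40)                  /* REX prefix */
        p++, len++;
    switch (*p) {
    case 0x90: return len + 1;                /* nop  */
    case 0xC3: return len + 1;                /* ret  */
    case 0xCC: return len + 1;                /* int3 */
    default:   return 0;                      /* toy: punt on the rest */
    }
}

static size_t x86_nth_offset(const uint8_t *code, size_t n) {
    size_t off = 0;
    for (size_t i = 0; i < n; i++) {          /* serial dependency chain */
        size_t l = x86_len(code + off);
        if (!l) return (size_t)-1;            /* unknown encoding */
        off += l;
    }
    return off;
}

int main(void) {
    /* toy stream: 66 90 (2-byte nop), 90 (nop), C3 (ret) */
    const uint8_t code[] = { 0x66, 0x90, 0x90, 0xC3 };
    return (a64_nth_offset(3) == 12 && x86_nth_offset(code, 3) == 4) ? 0 : 1;
}
```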
-
@TomF @regehr I’m not educated or experienced with this level of detail. So speaking purely as a consumer, how is it that Apple silicon so vastly outperforms similar x86-64 machines while also using less power and generating less heat?
Is it possible to make an x86-64 system that performs as well (that is, with the same ratio of power consumption and heat)?
Marketing *clearly* wants us to think that it’s ARM. All I know is my personal interactions with the two as a regular user of both PCs and Macs.
-
@photex @regehr Apple have an amazing team of engineers, and they focus very well. They are happy to ignore performance of legacy apps and tune for only modern use cases, and work closely with their compiler teams. The magic of a tight ecosystem!
If it was "just ARM" then the 20-30 other vendors that use ARM would also be seeing these amazing results (and would have for the last 40 years). Remember that Intel used to make ARM cores, too! Clearly, that is just marketing.
-
@TomF @regehr yes. I would absolutely *love* to see an x86-64 laptop hit this same level of price:performance:battery-lifetime. Some place like Framework maybe.
Everyone else seems to be repackaging Chinese designs from like one or two shops, and those places don’t experience sufficient market pressure to improve the situation yet.
Fingers crossed though that it happens.
-
@TomF @photex @regehr yeah, it's one of those classic "oh man what crazy optimization are they doing over there!?" And it turns out there's a few, but the main thing is that when you go into sleep the firmware actually, successfully, puts all the components into their deepest sleep states. A thing PC laptop vendors can only dream about. (Plus a morbillion small accumulated improvements across the entire stack)
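One concrete way to see this from the PC side (Linux-specific, and purely illustrative of the point above): the kernel reports which suspend depths the platform firmware actually offers, and on many modern PC laptops only the shallow s2idle mode shows up, so how deep the machine really sleeps is entirely down to how well firmware and drivers park each component.

```c
/* Which suspend depths does this platform's firmware offer? (Linux
 * sysfs; illustrative.) The bracketed entry is the active mode. */
#include <stdio.h>

int main(void) {
    FILE *f = fopen("/sys/power/mem_sleep", "r");
    char buf[128];
    if (f && fgets(buf, sizeof buf, f))
        printf("mem_sleep: %s", buf);   /* e.g. "s2idle [deep]" */
    if (f)
        fclose(f);
    return 0;
}
```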
-
@TomF @photex @regehr Another example: I have a super nice ASUS AMD Phoenix APU based laptop, which has great battery life. However, one of AMD's newer power-saving features, CPPC (broad strokes: adapting clocks with lower latency, to get into low-power states faster), just does not work with my specific laptop, because ASUS haven't shipped an updated AGESA with a fix for the feature. (Unclear whether this is for good reason, but I assume it's just costly to re-validate and they don't want to.)
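As a hedged aside (Linux-specific, and only a rough proxy for the firmware-level CPPC support described above): the standard cpufreq sysfs interface shows whether the CPPC-based amd-pstate driver is steering frequency selection, or whether the kernel fell back to the older ACPI P-state tables.

```c
/* Check which cpufreq driver is active (Linux sysfs; illustrative).
 * "amd-pstate" / "amd-pstate-epp" means the CPPC-based driver is in
 * use; "acpi-cpufreq" means the older ACPI P-state tables are. */
#include <stdio.h>

int main(void) {
    FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/scaling_driver", "r");
    char buf[64];
    if (f && fgets(buf, sizeof buf, f))
        printf("scaling_driver: %s", buf);
    if (f)
        fclose(f);
    return 0;
}
```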