Posts by Skyye

    Hi eLuSiVe, thank you! Glad to be here!

    I am not an engineer. I have an MBA in Finance and own AMD stock, so I keep track of performance-to-price on AMD, Nvidia and Intel CPUs and GPUs, and try to forecast performance-to-price for upcoming products as well.

    So just thinking big picture, TechPowerUp shows the 82 CU 3090 with 10,496 shaders vs. 5,120 for the 80 CU 6900 XT, yet both have similar PassMark and Time Spy scores. Half of Nvidia's 10,496 shader count (5,248) seems to make more sense.
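A quick sanity check of that per-CU math (a sketch using the published spec-sheet numbers; the "halve Nvidia's count" step is just my reading of the comparison):

```python
# Published spec-sheet shader and compute-unit counts.
rtx_3090_shaders, rtx_3090_cus = 10_496, 82
rx_6900xt_shaders, rx_6900xt_cus = 5_120, 80

print(rtx_3090_shaders / rtx_3090_cus)    # 128 shaders per CU as marketed
print(rx_6900xt_shaders / rx_6900xt_cus)  # 64 shaders per CU

# Halving Nvidia's marketed count gives a like-for-like 64 per CU:
print(rtx_3090_shaders / 2 / rtx_3090_cus)  # 64.0
```

On a per-CU basis the halved Nvidia figure lines up exactly with AMD's 64 shaders per CU, which is why the halved number tracks the benchmark scores better.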

    Thank you for the invite to this board!

    Both Nvidia (with its 3000 series) and Intel (with DG2) just confuse the issue with compute units and shaders. It's all bullshit. If it sounds too good to be true, it probably is. Nvidia's published shader count for the 30 series is twice the real number, and Intel's EU counts are 8x the actual number of compute units. They both do it to confuse people and obfuscate the truth.

    For the 46 CU part, the die size is 190 mm^2, which gives between 9.4 and 11.6 billion transistors depending on the efficiency estimate (AMD's first-gen 7 nm efficiency was 43% and second-gen 7 nm efficiency was 53%, and that is the range I used). That would put the 46 CU at slightly worse to slightly better than a 5700 XT.

    For the 64 CU part, I estimate 13-16 billion transistors, which would put it below an Xbox Series X to slightly better than a 6700 (though of course this depends on my die size estimate, in addition to the efficiency estimate).
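For anyone who wants to reproduce the estimate, here is a minimal sketch of how I read the method: "efficiency" is a fraction of the process node's peak transistor density, so transistors = die area x peak density x efficiency. The ~115 MTr/mm^2 peak figure for TSMC N7 is my assumption to make the numbers work out, not something stated above:

```python
# Assumed theoretical peak density for TSMC N7, in MTr/mm^2.
PEAK_N7_DENSITY = 115

def transistors_bn(die_mm2, efficiency):
    """Estimated transistor count in billions: area x peak density x efficiency."""
    return die_mm2 * PEAK_N7_DENSITY * efficiency / 1000

# 46 CU part at 190 mm^2, with the 43%-53% efficiency range:
low = transistors_bn(190, 0.43)
high = transistors_bn(190, 0.53)
print(round(low, 1), round(high, 1))  # 9.4 11.6
```

Plugging in 190 mm^2 reproduces the 9.4-11.6 billion range quoted above; the 64 CU estimate follows the same formula with a larger (estimated) die.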