Post All Your BarsWF x64 / BarsWF x64 CUDA Client Test Results Here Please!

  • Show-off xD
    But it's somehow interesting that the CPU is weaker in single-core performance than a C2D @ 4 GHz. (48 MHash/s vs. 74 MHash/s)

    "You are and will remain a human being, and you just can't overcome your humanity."

    Dennis_50300

    Edited 2 times, last by CryptonNite (January 19, 2017 at 11:46)

  • [Blocked image: https://abload.de/img/phenomnppsi.png]

    XFX GeForce GTX295 @ 720 / 1440 / 1220 MHz
    AMD Phenom II X6 1090T @ 4010 MHz

    Edit:
    Hm... Finding a tool that overclocks both GPUs doesn't seem to be so easy after all. Oh well, good old RivaTuner manages it :spitze:

    Here is the corrected shot:
    [Blocked image: https://abload.de/img/phenomb9kw2.png]

    "You are and will remain a human being, and you just can't overcome your humanity."

    Dennis_50300

    Edited once, last by CryptonNite (January 30, 2017 at 08:03)

  • Both tools do the same thing and both have a benchmark function. So what's your point?


    Questions:

    • Does hashcat do anything better than BarsWF?
    • Are results between BarsWF and hashcat directly comparable?


    What I can see is that hashcat is portable across a much wider range of operating systems, but that doesn't mean BarsWF shouldn't coexist with it. Since these are offline benchmarking tools, there are likely no critical security implications either. If the results aren't directly comparable, then both tools should simply coexist (consider the probably large number of existing results), and that's it.
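    On the comparability question: raw MHash/s figures depend heavily on how each tool generates candidates and batches the hashing, not just on the hash function itself, so numbers from different tools are best kept on separate scoreboards. A minimal single-threaded Python sketch of such a measurement (orders of magnitude slower than BarsWF's hand-tuned SSE/CUDA kernels; the function name is my own, not from either tool):

```python
import hashlib
import time

def md5_hashrate(duration=1.0):
    """Hash sequential numeric candidates for `duration` seconds
    and return the achieved hashes-per-second rate."""
    count = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration:
        # Candidate generation is part of the measured work,
        # just as it is in a real brute-force run.
        hashlib.md5(str(count).encode()).digest()
        count += 1
    return count / (time.perf_counter() - start)

if __name__ == "__main__":
    print(f"{md5_hashrate() / 1e6:.2f} MHash/s (single thread)")
```

    Even trivial changes to the candidate generation in that loop shift the measured rate, which is exactly why cross-tool comparisons are shaky.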

    Edit: There is another issue as well: hashcat relies on the Intel OpenCL runtime, and that runtime has a history of quickly dropping support for older microarchitectures and operating systems, even ones still in widespread use. The current SDK/runtime v16.1 won't even let you benchmark a Core 2 processor (SSE4.2 required). That's quite limiting.

    While I did manage to install Intel's runtime on CentOS 6.8, compile and link hashcat with a home-built GCC 6.2.0, and actually run it on a first-generation Core i7, I can't call it a valid replacement for BarsWF, given the circumstances. So BarsWF should live on as well!
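    Whether a given machine even clears that SSE4.2 requirement can be checked before installing the runtime at all. A minimal sketch, assuming a Linux system where CPU features are listed in /proc/cpuinfo (the helper name has_cpu_flag is hypothetical, not part of any tool discussed here):

```python
def has_cpu_flag(flag, cpuinfo_path="/proc/cpuinfo"):
    """Return True if `flag` appears in the CPU's feature list.

    Reads the Linux /proc/cpuinfo "flags" line; on non-Linux systems
    (or if the file is missing) this simply returns False.
    """
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return flag in line.split()
    except OSError:
        pass
    return False

if __name__ == "__main__":
    # A Core 2 lacks SSE4.2 and would report False here,
    # matching the SDK's refusal to run on it.
    print("SSE4.2 supported:", has_cpu_flag("sse4_2"))
```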

    1-6000-banner-88x31-jpg

    Proud owner of a 3dfx Voodoo5 6000 AGP prototype:

    • 3dfx Voodoo5 6000 AGP HiNT Rev.A-3700

    [//wp.xin.at] - No RISC, no fun!

    QotY: Girls Love, BEST Love; 2018 - Lo and behold, for it is the third Coming; The third great Year of Yuri, citric as it may be! Edit: 2019 wasn't too bad either... Edit: 2020... holy crap, we're on a roll here~♡!

    Quote Bier.jpg@IRC 2020: "The worse the fetish, the better!"

    Edited once, last by GrandAdmiralThrawn (January 30, 2017 at 08:40)

  • Quote from GrandAdmiralThrawn: "Both tools do the same thing and both have a benchmark function. So what's your point? [...] So BarsWF should live on as well!"

    He probably doesn't have anything new yet. I still use this to measure the performance difference between old, middle-aged and current-generation CPUs and CUDA GPGPUs. It still does its job, and the varied scores help give the users and myself a comparable picture of real performance differences. The Pascal GP104 scores are insane; I would even like to see some TITAN X Maxwell 2.0 and TITAN X Pascal scores just to compare.

    If that is possible, anyway.
    So yes, BarsWF x64 CUDA may be old and it may miss some things, but it gets the job done without the unwanted fancy extras that all those crappy 3DMark programs contain.
    But your x264 benchmark is a must-have as well; that together with BarsWF x64 CUDA is really all we need. Your benchmark is what I recommend for non-CUDA users, as it is more user-friendly, so you kind of have a win-win for that part xD :topmodel:

  • Oh, many thanks for that test run on your GP102-350-K1-A1, OutOfRange! :respekt: :respekt:
    Very good info for my project. @Bier.jpg, many thanks for your input too! :respekt: :respekt: As far as that GM200-400-A1 goes, very good info; I didn't have test runs with those yet :D

    Here are my dual 12-core AMD Opteron 6180 SE D1 CPUs and their new EVGA GTX 1080 FTW2 iCX doing a test run; it got me a nice 21.1K MD5 hashes per second total system score 8)
    http://abload.de/img/21.1kyuszekwa.jpg
    844, lel, which is pretty good considering my CPUs date from 2012 :topmodel:
    My card uses the refresh of the GP104-400-A1, which is the GP104-410-A1; it has a higher-clocked GPU and 11 Gbps VRAM, and uses about the same power as the one-year-older GTX 1080.
    The GP104-410-A1 is also fabricated on a 16 nm FinFET die process. I waited for this refresh since EVGA crew members told me to wait, as these cards also ship with EVGA's newer iCX coolers, so it was worth the wait for that part.

    For more pics of my Pascal upgrade go here:
    https://www.voodooalert.de/board/index.ph…&threadID=22871

    If there are members here with the TITAN X Pascal (the 3584-core / 96-ROP card) or the TITAN Xp (the 3840-core / 96-ROP card), some test runs would be a great honor! :thumbup:

    Edited 5 times, last by Gold Leader (July 7, 2017 at 12:33)

  • Never mind, only took me two minutes :)


    Oh, no matter how long it took, the values help me build a good picture of how CPUs, and also CUDA-based GPGPUs, have developed over a wide timespan :)
    And it's a very interesting project to keep going; my great thanks to everyone who has participated here! :thumbup: :respekt:

  • And despite the insane CPU, a single GTX 580 is still (barely) faster, if you were to leave your 1070s out of the game. ;)

    Edit: Ah, I just see that the E5-2679 v4 is pretty much exactly at GTX 480 level...


    Edited once, last by GrandAdmiralThrawn (July 31, 2017 at 13:04)

  • And despite the insane CPU, a single GTX 580 is still (barely) faster, if you were to leave your 1070s out of the game. ;)

    Edit: Ah, I just see that the E5-2679 v4 is pretty much exactly at GTX 480 level...


    The GTX 480 is slightly faster; my GTX 480 does 1584. Still, it's close to that of a GTX 480, which is quite impressive: 20 Xeon cores compared to 480 Fermi cores for that part 8)