
 

PPD is supposedly expected to be 55742 with a GTX 1060 3GB

 

Now, I don't know exactly what PPD a GTX 1060 should manage, but I do know it's more than 55742, at any rate. Let it run for a while.

 

Looks like you're right - here's someone who says he gets around 400k PPD with a 1060: http://www.tomshardware.co.uk/forum/id-3168717/gtx-1060-folding-home-fah-ppd-performance.html

 

I'll let it run overnight and report the result.

 

EDIT: It has now jumped up to about 300k PPD

Edited by WildCard_

I read that the new NVIDIA 378.66 drivers support OpenCL 2.0. Does anyone know if that gives us an advantage in GPU folding?

 

 

Good question, I'm not sure myself. For those with more expertise on the inner workings of graphics cards, here's an overview of the new functionality in OpenCL 2.0/2.1/2.2 (taken from Wikipedia):

 

 

 

OpenCL 2.0
  • Shared virtual memory
  • Nested parallelism
  • Generic address space
  • Images
  • C11 atomics
  • Pipes
  • Android installable client driver extension
OpenCL 2.1

The ratification and release of the OpenCL 2.1 provisional specification was announced on March 3, 2015 at the Game Developer Conference in San Francisco. It was released on November 16, 2015.[34] It replaces the OpenCL C kernel language with OpenCL C++, a subset of C++14. Vulkan and OpenCL 2.1 share SPIR-V as an intermediate representation allowing high-level language front-ends to share a common compilation target. Updates to the OpenCL API include:

  • Additional subgroup functionality
  • Copying of kernel objects and states
  • Low-latency device timer queries
  • Ingestion of SPIR-V code by runtime
  • Execution priority hints for queues
  • Zero-sized dispatches from host

AMD, ARM, Intel, HPC, and YetiWare have declared support for OpenCL 2.1.[35][36]

OpenCL 2.2

OpenCL 2.2 brings the OpenCL C++ kernel language into the core specification for significantly enhanced parallel programming productivity:[37][38][39]

  • The OpenCL C++ kernel language is a static subset of the C++14 standard and includes classes, templates, lambda expressions, function overloads and many other constructs for generic and meta-programming.
  • Leverages the new Khronos SPIR-V 1.1 intermediate language which fully supports the OpenCL C++ kernel language.
  • OpenCL library functions can now take advantage of the C++ language to provide increased safety and reduced undefined behavior while accessing features such as atomics, iterators, images, samplers, pipes, and device queue built-in types and address spaces.
  • Pipe storage is a new device-side type in OpenCL 2.2 that is useful for FPGA implementations by making connectivity size and type known at compile time, enabling efficient device-scope communication between kernels.
  • OpenCL 2.2 also includes features for enhanced optimization of generated code: Applications can provide the value of specialization constant at SPIR-V compilation time, a new query can detect non-trivial constructors and destructors of program scope global objects, and user callbacks can be set at program release time.
  • Runs on any OpenCL 2.0-capable hardware (Only driver update required)

 

 

 

Here is the info from the driver changelog, from NVIDIA itself:

 

 

OpenCL 2.0

New features in OpenCL 2.0 are available in the driver for evaluation purposes only. The following are the features as well as a description of known issues in the driver:

Device side enqueue

• The current implementation is limited to 64-bit platforms only.
• OpenCL 2.0 allows kernels to be enqueued with global_work_size larger than the compute capability of the NVIDIA GPU. The current implementation supports only combinations of global_work_size and local_work_size that are within the compute capability of the NVIDIA GPU. The maximum supported CUDA grid and block size of NVIDIA GPUs is available at http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#compute-capabilities. For a given grid dimension, the global_work_size can be determined by CUDA grid size x CUDA block size.
• For executing kernels (whether from the host or the device), OpenCL 2.0 supports non-uniform ND-ranges where global_work_size does not need to be divisible by the local_work_size. This capability is not yet supported in the NVIDIA driver, and therefore not supported for device side kernel enqueues.

Shared virtual memory

• The current implementation of shared virtual memory is limited to 64-bit platforms only.

 


 

 

You're in 67th place for overall WUs. I can't get much higher on that ranking without buying more hardware. I'll probably start folding on the GPU soon, once I've climbed a bit more.

 

I'm slowly climbing up that list too. I have one machine on NaCl.


I would have helped if I knew the answer. I'm on 376.48, but figured there was a newer one someone could recommend.

I don't need game-ready drivers ... I need FAH-ready drivers.

 

So it supports c21 0018, if I remember correctly.

PS: it's not you who's the problem. :)

But another thing: why is the team's WU count going down? :)
