OPENCL doesnt work with Radeon R5

Can't figure out how to solve an issue? Here is where to ask for help.
chilliramen
Posts: 3
Joined: Fri May 12, 2017 10:38 am

OPENCL doesnt work with Radeon R5

Postby chilliramen » Fri May 12, 2017 8:17 pm

Hi all,

I am very new to both DAZ3D and Reality. I can't figure out how to make OpenCL work with my GPU, a Radeon R5 M430.

When I tried to render a scene from DAZ3D, the LuxRender log said:
[2017-05-13 07:12:46 Severe error: 2] RUNTIME ERROR: OpenCL device selection string has the wrong length, must be 4 instead of 3

I don't really understand whether Reality can see my GPU. While it mentions AMD Parallel Processing, the device list seems to be missing the AMD card; it only mentions ICore and Hainan. Please see the attachment.

I already tried installing the AMD driver, but the list doesn't change. Is there any way I can make OpenCL work?

By the way, I am sorry if I missed a similar thread about this kind of problem.
Attachments
Screenshot 2017-05-13 07.13.38.png

sigstan
Posts: 431
Joined: Sat Jan 24, 2015 3:59 pm
Location: Denmark

Re: OPENCL doesnt work with Radeon R5

Postby sigstan » Sat May 13, 2017 3:15 pm

First of all, the OpenCL API code name for the R5 M430 card is in fact Hainan, which means your GPU is being detected correctly.

However, I'm not sure why you have two occurrences of the Intel Core i5 7200U; only a dual CPU configuration should list the CPU twice. I can also see that your Intel driver seems to be quite new (v2.1). I run the Intel 1.2 driver, so I'm not sure about the impact of the newer one.

Do you have any issues with CPU-accelerated rendering? - That is where you should start, when getting to know Reality/Luxrender.

That being said, the key when dealing with OpenCL is to keep things simple for the initial tests.

Make a simple scene with a single camera, a primitive like a sphere, and IBL for lighting. Keep the frame size low, e.g. 960x720, to avoid running out of GPU memory.

Based on your "Scene configuration" screen shot you should:
  • Remove the tick in extra-boost (the option may sound cool, but it is biased and in your case just adds complexity).
  • Set OpenCL Group Size to either zero or 64, with a preference for the latter. Zero should trigger auto-detect, but this does not always work.
  • Stick with just the GPU initially, so in your case only have Hainan ticked.
  • Set FFs removal to zero.
  • Set Light Strategy to Auto.
  • Set Max path length to 13.
Hopefully a render will then start once the kernel compilation has completed.

If you still have your original issue, you can switch Log detail information to "Debug Information" on the Renderer tab. The LuxRender log should then contain a section that shows how LuxRender sees your setup. The interesting part should look something like this (example from my setup, which is a dual CPU configuration):

Code:

[2017-05-13 20:28:24 Debug: 0] [LuxRays][11.828] OpenCL Platform 1: NVIDIA Corporation
[2017-05-13 20:28:24 Debug: 0] [LuxRays][11.828] Device 0 name: NativeThread
[2017-05-13 20:28:24 Debug: 0] [LuxRays][11.828] Device 0 type: NATIVE_THREAD
[2017-05-13 20:28:24 Debug: 0] [LuxRays][11.828] Device 0 compute units: 1
[2017-05-13 20:28:24 Debug: 0] [LuxRays][11.828] Device 0 preferred float vector width: 4
[2017-05-13 20:28:24 Debug: 0] [LuxRays][11.828] Device 0 max allocable memory: 0MBytes
[2017-05-13 20:28:24 Debug: 0] [LuxRays][11.828] Device 0 max allocable memory block size: 0MBytes
[2017-05-13 20:28:24 Debug: 0] [LuxRays][11.828] Device 1 name:       Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz
[2017-05-13 20:28:24 Debug: 0] [LuxRays][11.828] Device 1 type: OPENCL_CPU
[2017-05-13 20:28:24 Debug: 0] [LuxRays][11.828] Device 1 compute units: 24
[2017-05-13 20:28:24 Debug: 0] [LuxRays][11.828] Device 1 preferred float vector width: 8
[2017-05-13 20:28:24 Debug: 0] [LuxRays][11.828] Device 1 max allocable memory: 32718MBytes
[2017-05-13 20:28:24 Debug: 0] [LuxRays][11.828] Device 1 max allocable memory block size: 8179MBytes
[2017-05-13 20:28:24 Debug: 0] [LuxRays][11.828] Device 2 name: GeForce GTX 970
[2017-05-13 20:28:24 Debug: 0] [LuxRays][11.828] Device 2 type: OPENCL_GPU
[2017-05-13 20:28:24 Debug: 0] [LuxRays][11.828] Device 2 compute units: 13
[2017-05-13 20:28:24 Debug: 0] [LuxRays][11.828] Device 2 preferred float vector width: 1
[2017-05-13 20:28:24 Debug: 0] [LuxRays][11.828] Device 2 max allocable memory: 4096MBytes
[2017-05-13 20:28:24 Debug: 0] [LuxRays][11.828] Device 2 max allocable memory block size: 1024MBytes
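As a side note, that device list is regular enough to pull out with a short script. A minimal sketch in Python (the regular expression, the `parse_devices` helper, and the sample lines are assumptions based only on the log excerpt above):

```python
import re

# Matches lines such as:
# [2017-05-13 20:28:24 Debug: 0] [LuxRays][11.828] Device 2 type: OPENCL_GPU
DEVICE_RE = re.compile(r"Device (\d+) (name|type): +(.+)")

def parse_devices(log_text):
    """Collect per-device name/type attributes from a LuxRays debug log."""
    devices = {}
    for line in log_text.splitlines():
        m = DEVICE_RE.search(line)
        if m:
            index, field, value = int(m.group(1)), m.group(2), m.group(3).strip()
            devices.setdefault(index, {})[field] = value
    return devices

sample = """\
[2017-05-13 20:28:24 Debug: 0] [LuxRays][11.828] Device 0 name: NativeThread
[2017-05-13 20:28:24 Debug: 0] [LuxRays][11.828] Device 0 type: NATIVE_THREAD
[2017-05-13 20:28:24 Debug: 0] [LuxRays][11.828] Device 2 name: GeForce GTX 970
[2017-05-13 20:28:24 Debug: 0] [LuxRays][11.828] Device 2 type: OPENCL_GPU
"""

devices = parse_devices(sample)
# The GPU you expect to render on should show up with type OPENCL_GPU.
gpus = [d["name"] for d in devices.values() if d.get("type") == "OPENCL_GPU"]
```

If the GPU you expect does not show up with type OPENCL_GPU, the OpenCL driver is the first suspect.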

Also see this thread: viewtopic.php?f=28&t=674

Hope this helps.
/Sigstan

chilliramen
Posts: 3
Joined: Fri May 12, 2017 10:38 am

Re: OPENCL doesnt work with Radeon R5

Postby chilliramen » Sun May 14, 2017 4:38 am

Hey Sigstan,

My CPU-accelerated tryout was very slow, even compared to a usual iRay render. That may be because I didn't really try to change anything.

Thank you for pointing out things for a newbie like me to follow. Let me get into it and get back to you later.

chilliramen
Posts: 3
Joined: Fri May 12, 2017 10:38 am

Re: OPENCL doesnt work with Radeon R5

Postby chilliramen » Sun May 14, 2017 5:00 am

sigstan wrote:First of all, the OpenCL API code name for the R5 M430 card is in fact Hainan, which means your GPU is being detected correctly. [...]


Sigstan,

I tried all of your configuration in Reality, but the result remains the same and the log still shows the same error. I read that for AMD I have to revert the driver to version 15, which I can no longer find on the AMD or ATI website.

[2017-05-14 15:57:24 Severe error: 2] RUNTIME ERROR: OpenCL device selection string has the wrong length, must be 4 instead of 3

Also, is there any chance I could get faster renders even with CPU acceleration?

Thanks!
Attachments
Screenshot 2017-05-14 15.58.36.png

sigstan
Posts: 431
Joined: Sat Jan 24, 2015 3:59 pm
Location: Denmark

Re: OPENCL doesnt work with Radeon R5

Postby sigstan » Sun May 14, 2017 10:22 am

Hi Chilliramen

The debug log should tell you which OpenCL devices LuxRender has detected. Sometimes the extra device is an internal GPU, which can simply be disabled if it is not used anyway.

In these cases it is also possible to run a render after a slight edit to the scene's *.lxs file.

If you look in the LuxRender files for your scene, there should be a *.lxs file. It contains a section that will look something like this:
Renderer "luxcore"
"string config" [
"renderengine.type = PATHOCL"
"opencl.devices.select = 100"
"opencl.gpu.workgroup.size = 64"
]

The opencl.devices.select line is the one LuxRender is complaining about. It is a set of flags turning your OpenCL devices on or off; each 1 signifies a device that is turned on. LuxRender apparently expects 4 devices here, but Reality is only supplying 3.

You can try to edit the file and add a zero to the end of the string (e.g. 1000) and see if the scene will render. Go to the Reality directory and down into the Lux directory (typically C:\Program Files\Reality_DS\lux), where you will find luxrender.exe. Run the program and, via File -> Open, select the *.lxs file you have edited.
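The manual edit described above can also be sketched in code. A minimal Python illustration (pad_device_selection is a hypothetical helper, not part of Reality or LuxRender; it just pads the flag string with trailing zeros to the length LuxRender asks for):

```python
import re

def pad_device_selection(lxs_text, expected_length):
    """Pad the opencl.devices.select flag string with trailing zeros
    so its length matches what LuxRender expects."""
    def fix(match):
        flags = match.group(2)
        # Trailing zeros leave the additional devices switched off.
        return match.group(1) + flags.ljust(expected_length, "0")
    return re.sub(r"(opencl\.devices\.select = )([01]+)", fix, lxs_text)

config = '"opencl.devices.select = 100"'
# The error said the string must be 4 characters instead of 3.
fixed = pad_device_selection(config, 4)
```

Because the padding is zeros, the same devices stay enabled; the string just grows to the length LuxRender demands.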

As for CPU acceleration, it can be relatively fast, but it depends very much on your hardware. With CPU acceleration and the default settings (Mono-directional/Sobol), it should, as far as I know, render faster than iRay in CPU mode for most scenes.

For an intro to Reality rendering take a look at PlatnumK's tutorial: http://preta3d.com/forums/viewtopic.php?f=23&t=643#p5001
/Sigstan

jtarrier
Posts: 2
Joined: Mon Oct 09, 2017 6:01 am

Re: OPENCL doesnt work with Radeon R5

Postby jtarrier » Mon Oct 09, 2017 6:10 am

chilliramen wrote:I read that for AMD I have to revert the driver to version 15, which I can no longer find on the AMD or ATI website.


Hello chilliramen

If you find you need older versions of the Radeon drivers, here is a great resource with a bunch of links back to the original AMD website:
ATI Driver Version Cheat Sheet
http://www.hal6000.com/seti/boinc_ati_g ... _sheet.htm

Best regards,
Jeremy

