RUNTIME ERROR: OpenCL device selection string has the wrong length, must be 3 instead of 2

cmfrederiksen
Posts: 3
Joined: Sat Dec 08, 2018 11:34 am

RUNTIME ERROR: OpenCL device selection string has the wrong length, must be 3 instead of 2

Postby cmfrederiksen » Sat Dec 08, 2018 11:45 am

Hi,

I know this problem has come up before.

I ran a debug session and got the output listed below.
It counts 2 devices: my NVIDIA card and my CPU. But I also have an Intel(R) HD Graphics 630 that is used for my screen (I have a gaming computer).

How do I avoid including the Intel(R) HD Graphics 630 in the count? (In Reality it only shows the GeForce GTX 1060 and the Intel(R) Core(TM) i5-7300HQ CPU.)

[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] OpenCL Platform 0: NVIDIA Corporation
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] OpenCL Platform 1: Intel(R) Corporation
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Device 0 name: NativeThread
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Device 0 type: NATIVE_THREAD
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Device 0 compute units: 1
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Device 0 preferred float vector width: 4
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Device 0 max allocable memory: 0MBytes
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Device 0 max allocable memory block size: 0MBytes
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Device 1 name: GeForce GTX 1060
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Device 1 type: OPENCL_GPU
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Device 1 compute units: 10
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Device 1 preferred float vector width: 1
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Device 1 max allocable memory: 6144MBytes
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Device 1 max allocable memory block size: 1536MBytes
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Device 2 name: Intel(R) HD Graphics 630
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Device 2 type: OPENCL_GPU
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Device 2 compute units: 23
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Device 2 preferred float vector width: 1
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Device 2 max allocable memory: 3206MBytes
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Device 2 max allocable memory block size: 2047MBytes
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Device 3 name: Intel(R) Core(TM) i5-7300HQ CPU @ 2.50GHz
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Device 3 type: OPENCL_CPU
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Device 3 compute units: 4
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Device 3 preferred float vector width: 8
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Device 3 max allocable memory: 8035MBytes
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Device 3 max allocable memory block size: 2008MBytes
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Preprocessing DataSet
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Total vertex count: 75094
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Total triangle count: 141440
[2018-12-08 16:30:18 Debug: 0] [LuxRays][2.219] Preprocessing DataSet done
[2018-12-08 16:30:18 Severe error: 2] RUNTIME ERROR: OpenCL device selection string has the wrong length, must be 3 instead of 2


Thanks in advance.

Casper

sigstan
Posts: 434
Joined: Sat Jan 24, 2015 3:59 pm
Location: Denmark

Re: RUNTIME ERROR: OpenCL device selection string has the wrong length, must be 3 instead of 2

Postby sigstan » Sat Dec 08, 2018 3:17 pm

cmfrederiksen wrote:I know this problem has come up before.

I ran a debug session and got the output listed below.
It counts 2 devices: my NVIDIA card and my CPU. But I also have an Intel(R) HD Graphics 630 that is used for my screen (I have a gaming computer).

How do I avoid including the Intel(R) HD Graphics 630 in the count? (In Reality it only shows the GeForce GTX 1060 and the Intel(R) Core(TM) i5-7300HQ CPU.)

Just to be clear, this is a known Reality OpenCL detection bug.

There are only two ways of handling this issue:
  • Disable the integrated GPU (HD Graphics 630 in your case). This is best done in the BIOS, but disabling it in Windows might be enough. I have not tried the latter, so I have no clue whether it would work.
  • Otherwise, export the scene files and manually edit the *.lxs file to ensure the correct bits are set (in your case try "opencl.devices.select = 101"), then start the render manually.
Sorry, but those are the only working options at the moment.
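To illustrate why the string must be exactly 3 characters here, the sketch below shows how a selection string like "101" lines up with the devices from the debug log above. The device list is copied from that log; the one-character-per-OpenCL-device mapping ('1' = enabled, '0' = disabled, native-thread devices excluded) is my assumption, inferred from the "101" value and the error message.

```python
# Hypothetical sketch of how an OpenCL device selection string is built.
# Devices copied from the debug log in the first post; only OpenCL
# devices get a character, so the string length must equal 3 here.
devices = [
    ("NativeThread",                "NATIVE_THREAD"),
    ("GeForce GTX 1060",            "OPENCL_GPU"),
    ("Intel(R) HD Graphics 630",    "OPENCL_GPU"),
    ("Intel(R) Core(TM) i5-7300HQ", "OPENCL_CPU"),
]

# Enable every OpenCL device except the integrated Intel GPU.
enabled = {"GeForce GTX 1060", "Intel(R) Core(TM) i5-7300HQ"}

selection = "".join(
    "1" if name in enabled else "0"
    for name, dev_type in devices
    if dev_type.startswith("OPENCL")   # native threads are not counted
)

print(selection)  # -> "101": GTX 1060 on, HD 630 off, CPU on
```

The error "must be 3 instead of 2" then simply means a 2-character string was supplied for a machine that exposes 3 OpenCL devices.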
/Sigstan

DeviantShade
Posts: 45
Joined: Sat Mar 26, 2016 6:10 pm

Re: RUNTIME ERROR: OpenCL device selection string has the wrong length, must be 3 instead of 2

Postby DeviantShade » Sat Dec 08, 2018 4:22 pm

I use a second machine for network rendering, which has a different number of OpenCL devices than my main machine. This means I can't get OpenCL rendering to work without disabling one of the extra devices. Is there a way to get a functional workaround for this?
Crossing the Uncanny Valley

sigstan
Posts: 434
Joined: Sat Jan 24, 2015 3:59 pm
Location: Denmark

Re: RUNTIME ERROR: OpenCL device selection string has the wrong length, must be 3 instead of 2

Postby sigstan » Sun Dec 09, 2018 4:37 am

DeviantShade wrote:I use a second machine for network rendering, which has a different number of OpenCL devices than my main machine. This means I can't get OpenCL rendering to work without disabling one of the extra devices. Is there a way to get a functional workaround for this?

As far as I remember, even if you have the same number of OpenCL devices on both master and slave(s), you're still not able to perform an OpenCL network render unless the OpenCL devices involved are identical on all machines. This means network rendering is locked to CPU rendering in most cases.
/Sigstan

cmfrederiksen
Posts: 3
Joined: Sat Dec 08, 2018 11:34 am

Re: RUNTIME ERROR: OpenCL device selection string has the wrong length, must be 3 instead of 2

Postby cmfrederiksen » Sun Dec 09, 2018 9:23 am

sigstan wrote:
cmfrederiksen wrote:I know this problem has come up before.

I ran a debug session and got the output listed below.
It counts 2 devices: my NVIDIA card and my CPU. But I also have an Intel(R) HD Graphics 630 that is used for my screen (I have a gaming computer).

How do I avoid including the Intel(R) HD Graphics 630 in the count? (In Reality it only shows the GeForce GTX 1060 and the Intel(R) Core(TM) i5-7300HQ CPU.)

Just to be clear, this is a known Reality OpenCL detection bug.

There are only two ways of handling this issue:
  • Disable the integrated GPU (HD Graphics 630 in your case). This is best done in the BIOS, but disabling it in Windows might be enough. I have not tried the latter, so I have no clue whether it would work.
  • Otherwise, export the scene files and manually edit the *.lxs file to ensure the correct bits are set (in your case try "opencl.devices.select = 101"), then start the render manually.
Sorry, but those are the only working options at the moment.


Hi sigstan. I didn't realize it was a known bug. I went with editing the *.lxs file, and that worked.


Thank you very much.

Casper

DeviantShade
Posts: 45
Joined: Sat Mar 26, 2016 6:10 pm

Re: RUNTIME ERROR: OpenCL device selection string has the wrong length, must be 3 instead of 2

Postby DeviantShade » Sun Dec 09, 2018 1:26 pm

sigstan wrote:
DeviantShade wrote:I use a second machine for network rendering, which has a different number of OpenCL devices than my main machine. This means I can't get OpenCL rendering to work without disabling one of the extra devices. Is there a way to get a functional workaround for this?

As far as I remember, even if you have the same number of OpenCL devices on both master and slave(s), you're still not able to perform an OpenCL network render unless the OpenCL devices involved are identical on all machines. This means network rendering is locked to CPU rendering in most cases.


I swear I had network OpenCL rendering working at one point, before I updated my drivers. My two machines use an AMD and an Nvidia GPU respectively. Hopefully we'll get a reliable workflow in the future. I do animation in Lux, and relying on a single GPU is far too time consuming.
Crossing the Uncanny Valley

cmfrederiksen
Posts: 3
Joined: Sat Dec 08, 2018 11:34 am

Re: RUNTIME ERROR: OpenCL device selection string has the wrong length, must be 3 instead of 2

Postby cmfrederiksen » Sun Dec 09, 2018 3:12 pm

DeviantShade wrote:
sigstan wrote:
DeviantShade wrote:I use a second machine for network rendering, which has a different number of OpenCL devices than my main machine. This means I can't get OpenCL rendering to work without disabling one of the extra devices. Is there a way to get a functional workaround for this?

As far as I remember, even if you have the same number of OpenCL devices on both master and slave(s), you're still not able to perform an OpenCL network render unless the OpenCL devices involved are identical on all machines. This means network rendering is locked to CPU rendering in most cases.


I swear I had network OpenCL rendering working at one point, before I updated my drivers. My two machines use an AMD and an Nvidia GPU respectively. Hopefully we'll get a reliable workflow in the future. I do animation in Lux, and relying on a single GPU is far too time consuming.



Also, I should add that I didn't get the error if I selected CPU-only rendering.

Casper

sigstan
Posts: 434
Joined: Sat Jan 24, 2015 3:59 pm
Location: Denmark

Re: RUNTIME ERROR: OpenCL device selection string has the wrong length, must be 3 instead of 2

Postby sigstan » Mon Dec 10, 2018 5:26 pm

DeviantShade wrote:I swear I had network OpenCL rendering working at one point, before I updated my drivers. My two machines use an AMD and an Nvidia GPU respectively. Hopefully we'll get a reliable workflow in the future. I do animation in Lux, and relying on a single GPU is far too time consuming.

I have never heard of anyone being successful with such a setup, but then it could have been driver issues.
Assuming it is possible, then at a minimum the GPUs would have to have the same amount of VRAM; otherwise rendering would be limited by the card with the least. And then the question arises whether Nvidia and AMD use the same VRAM management and therefore have the same limits. I see lots of potential issues :(
/Sigstan
