The rCUDA team has been working over the last months to create an rCUDA module for the SLURM scheduler. This scheduler, which is used in many clusters around the world, efficiently dispatches computing jobs to the different nodes of the cluster. However, when a job requires one or several GPUs, the SLURM scheduler assumes that those GPUs will be local to the node where the job is targeted, thus hindering the use of remote GPU virtualization frameworks such as rCUDA. With the new module created by the rCUDA team, the SLURM scheduler is aware of the use of remote GPU virtualization, making it possible to share the GPUs available in the cluster among the several applications demanding them, independently of the exact node where the application is executed and also independently of the exact node where the GPU is located. The new module allows GPUs to be scheduled in two ways: exclusively, or shared concurrently among several applications. In both cases, the use of remote GPUs is feasible.
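As a rough illustration, a job in such an rCUDA-aware SLURM setup could request remote GPUs through SLURM's generic resource (GRES) mechanism. The resource name `rgpu` and the script details below are assumptions for the sketch, not confirmed syntax of the rCUDA module; consult the module's documentation for the exact resource names it registers:

```shell
#!/bin/bash
#SBATCH --job-name=rcuda-job
#SBATCH --nodes=1
# Request two GPUs through the rCUDA-aware scheduler. Because the
# scheduler knows the GPUs are virtualized, they need not reside on
# the node where this job runs. The GRES name "rgpu" is hypothetical;
# your SLURM configuration may register rCUDA-managed GPUs under a
# different resource name.
#SBATCH --gres=rgpu:2

# The application runs unmodified; rCUDA forwards its CUDA calls
# to whichever nodes actually host the assigned GPUs.
./my_cuda_application
```

Whether the assigned GPUs are held exclusively or shared with other jobs would be a scheduling policy decision, configured on the SLURM side rather than in the application.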

The rCUDA team is glad to announce that its remote GPU virtualization technology now supports the ARM processor architecture. The new release of rCUDA for this low-power processor has been developed for the Ubuntu 11.04 and Ubuntu 12.04 ARM Linux distributions. With this new rCUDA release, it is also possible to leverage hybrid platforms where the application uses ARM CPUs while requesting acceleration services provided by remote GPUs installed in x86 nodes. The opposite is also possible: an application running on an x86 computer can access remote GPUs attached to ARM systems.
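In such a hybrid setup, the client side only needs to be told where the remote GPUs live. The sketch below shows how an ARM client might be pointed at a GPU in an x86 node; the variable names follow the usual rCUDA client configuration, but the hostname is made up and the exact syntax should be checked against the user guide of your rCUDA release:

```shell
# Number of remote GPUs the application should see
export RCUDA_DEVICE_COUNT=1

# First remote device: <server address>:<GPU index on that server>.
# "x86-server.example.com" is a placeholder for a real rCUDA server.
export RCUDA_DEVICE_0=x86-server.example.com:0

# Run the unmodified CUDA application on the ARM board; rCUDA
# intercepts its CUDA calls and forwards them over the network.
./my_cuda_application
```

The reverse direction works the same way: an x86 client would simply set these variables to the address of an ARM-based rCUDA server.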

The rCUDA remote GPU virtualization framework was recently updated to support the latest CUDA 5.0 release. Now, with the launch of the new CUDA 5.5 release candidate, the rCUDA team has tested the rCUDA middleware with this new CUDA release and is working on updating rCUDA to the new CUDA version.

The rCUDA team has been working for the last 6 months on porting the rCUDA middleware to the ARM processor architecture. The port was carried out on the CARMA and KAYLA NVIDIA development platforms. We are now completing the first release of rCUDA for this low-power processor, which will be available very soon. Our tests include running an application on the ARM processor that requests GPGPU services from a remote x86 rCUDA server. Conversely, it is also possible to locate the GPU in an ARM-based server and request GPGPU services from an x86 computer.


The new features of rCUDA, as well as its latest developments, were presented on June 16 at the HPC Advisory Council European Conference 2013, held in Leipzig (Germany) during the International Supercomputing Conference (ISC'13). The slides of the presentation are available here. You can also access a video of the presentation at this link.