
The new rCUDA version is able to safely partition the memory of a GPU among applications


The rCUDA Team is happy to announce that the new rCUDA version (not yet released) is able to create isolated partitions of the GPU memory and assign each partition to an application. This is done without using virtual machines or hypervisors. In this way, the memory of a GPU can be split into a large number of isolated partitions, each of a different size. For instance, a GPU with 32 GB of memory can be split into 29 partitions: 1 partition of 8 GB, 2 partitions of 3 GB each, 10 partitions of 1 GB each, and 16 partitions of 0.5 GB each. Another possibility is creating 128 partitions of 0.25 GB each. The number of partitions is not limited and each partition can be of any size; the only restriction is that the sum of the partition sizes cannot exceed the total GPU memory. Moreover, when an application requests more memory than is available in its partition, the application receives an error, thus avoiding interference with the other applications served by that GPU.

Notice that partitioning the GPU memory is achieved without a virtual machine hypervisor. This partitioning feature can therefore be combined with Slurm in order to provide safe GPU scheduling. Alternatively, it can also be used with virtual machines in order to safely share a remote GPU among a large number of virtual machines.
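As a rough illustration, the sketch below shows how this behavior would look from the point of view of an application. It is a minimal, hypothetical example that uses only standard CUDA Runtime API calls; the 1 GB partition size is an assumption made for illustration, and we assume that exceeding the partition is reported through the usual CUDA allocation error codes. An oversized request simply fails with an error the application can handle, instead of affecting the other applications sharing the GPU.

    // Minimal sketch. Assumptions: the application was granted a 1 GB
    // partition, and exceeding it surfaces as a CUDA allocation error.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        const size_t partition_size = 1UL << 30;   // assumed 1 GB partition
        void *small_buf = nullptr, *big_buf = nullptr;

        // An allocation that fits in the partition succeeds as usual.
        cudaError_t err = cudaMalloc(&small_buf, partition_size / 2);
        printf("512 MB allocation: %s\n", cudaGetErrorString(err));

        // Requesting more memory than the partition holds is expected to
        // fail with an allocation error, without disturbing the other
        // applications served by the same GPU.
        err = cudaMalloc(&big_buf, 4 * partition_size);
        if (err != cudaSuccess) {
            printf("4 GB allocation rejected: %s\n", cudaGetErrorString(err));
        }

        cudaFree(small_buf);
        return 0;
    }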

