cuda

  • Compute Unified Device Architecture; high-performance computing; unified computing architecture
  1. Appendix B lists the mathematical functions supported in CUDA.

    Appendix B enumerates the mathematical functions supported in CUDA.

  2. The results showed that CUDA could speed up the calculation and was well suited to real-time target tracking on the host computer.

    The results show that applying CUDA greatly improves the real-time performance of target tracking on the host computer, and that it can also be applied in many other fields.

  3. In the user environment, with the aid of CUDA, it will be possible to calculate the physics in games.

    On the user's side, with the aid of CUDA technology, it will be possible to compute the physics in games.

  4. In this paper, we implement an efficient matrix multiplication on the GPU using NVIDIA's CUDA.

    This paper uses NVIDIA's CUDA to implement an efficient matrix multiplication on the GPU. (A minimal CUDA kernel sketch along these lines appears after this list.)

  5. Experiments comparing against the CPU's computing power show that CUDA's ability to process data in parallel is very strong.

    After the experiments, a comparison with the CPU's computing power shows that CUDA's ability to process data in parallel is very strong.

  6. Meanwhile, this thesis analyzes the characteristics of the Gaussian mixture model and the ViBe algorithm, and gives CUDA parallel solutions for both.

    Meanwhile, the thesis analyzes the characteristics of the Gaussian mixture model and the ViBe model, gives CUDA parallel implementations of both, and experimentally compares them with CPU implementations of the two models.

  7. Using the CUDA technology of NVIDIA graphics cards, hardware acceleration of cloth simulation was implemented, and its frame rate was increased several times over.

    Using the Compute Unified Device Architecture (CUDA) technology of Nvidia graphics cards, cloth simulation was hardware-accelerated, raising its frame rate by dozens of times.

  8. CUDA gives full play to the advantages of the GPU's streaming multiprocessor array and greatly improves the efficiency of parallel computation programs.

    CUDA brings the performance of the GPU's streaming multiprocessor array into full play and greatly improves the efficiency of parallel computing programs.

  9. The C1060 processes data with up to two teraflops of computing power, using NVIDIA's CUDA architecture based on the C programming language.

    Using Nvidia's CUDA parallel computing technology, the C1060's data processing capability can reach 2 teraflops.

  10. The Compute Unified Device Architecture (CUDA) programming model provides programmers with adequate C-like APIs to better exploit the parallel power of the GPU.

    In addition, the GPU-based CUDA programming model provides programmers with ample C-like APIs, making it easy for them to exploit the GPU's parallel computing power.

  11. A CPU-based particle simulation code and a parallel GPU-based one, which uses the Compute Unified Device Architecture (CUDA), have been developed.

    A CPU-based particle simulation system and a GPU-based parallel particle simulation system using the Compute Unified Device Architecture (CUDA) were developed respectively.

  12. According to the characteristics of the MPI + CUDA parallel architecture, task parallelism is added to the charge distribution algorithm on top of its data parallelism, to improve the parallelism of the whole task.

    In addition, according to the characteristics of the heterogeneous MPI+CUDA parallel architecture, task parallelism was added to the charge distribution algorithm on top of its data parallelism, improving the parallelism of the whole task.

  13. The specific contents include the following: 1. background technical material for the absorption module; 2. CUDA parallel programming techniques; 3. the implementation of the absorption module with CUDA.

    The specific contents include: 1. background technical material on the absorption module; 2. CUDA parallel programming techniques; 3. the concrete implementation of parallelizing the absorption module with CUDA.

  14. Huang declined to share specifics regarding Apple's intentions, but a conference of Mac developers would be a likely place to discuss any plans Apple might have for CUDA.

    Out of consideration for Apple's intentions, Jensen Huang did not reveal further details, but a Mac developer conference would probably be a suitable place to discuss any plans Apple has to adopt CUDA technology.

  15. Apple's implementation "won't be called CUDA, but it will be called something else," Huang said in an interview here at Nvidia's headquarters on Wednesday.

    "Apple probably won't call it CUDA," Jensen Huang said in an interview at Nvidia's headquarters on Wednesday.

  16. Any GPU device has a device driver, so targeting it makes more sense than generating CUDA or OpenCL code, which would require users to install other SDKs.

    All GPU devices have a device driver, so programming against it makes more sense; it is better than generating CUDA or OpenCL code, because that would also require users to install other SDKs.

  17. In particular, after nVIDIA launched the Compute Unified Device Architecture (CUDA) platform, the GPU has been able to solve complex computational problems quickly, and it has begun to enter the field of highly parallel computing.

    In particular, after nVIDIA launched the CUDA development platform for general-purpose computing, the GPU has been able to solve complex computational problems quickly and has begun to move aggressively into the field of highly parallel computing.

  18. " Apple knows a lot about CUDA ," Huang said , implying the company might be ready to formally embrace Nvidia 's technology to make it easier to exploit graphics chips inside Macs .

    他说“苹果很了解CUDA技术”,暗示苹果公司已经准备好正式在其电脑中加入Nvidia的该技术以便更好的发挥其显卡的作用。

  19. The parallel programming model applied on the GPU cluster is MPI + CUDA. First, the difference-image data are assigned to each compute node in the cluster, and then the assigned data are processed on the GPUs with parallel computing.

    The GPU-cluster-based parallelization method adopts the MPI+CUDA programming model: the difference image to be clustered is first distributed to each compute node in the cluster, and then each node uses its GPU for parallel computation. (A minimal MPI + CUDA sketch of this pattern appears after this list.)
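
As a rough illustration of examples 4 and 10 above, the following is a minimal, naive CUDA matrix-multiplication sketch using the C-like runtime API (cudaMalloc, cudaMemcpy, and a <<<grid, block>>> launch). The matrix size, block shape, and kernel are illustrative assumptions, not the implementation from the cited paper.

```cuda
// Naive matrix multiplication: one thread computes one element of C = A * B.
// Sizes and initial values below are illustrative assumptions only.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void matmul(const float* A, const float* B, float* C, int n) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;  // output row for this thread
    int col = blockIdx.x * blockDim.x + threadIdx.x;  // output column for this thread
    if (row < n && col < n) {
        float sum = 0.0f;
        for (int k = 0; k < n; ++k)
            sum += A[row * n + k] * B[k * n + col];
        C[row * n + col] = sum;
    }
}

int main() {
    const int n = 512;
    const size_t bytes = (size_t)n * n * sizeof(float);

    // Host buffers, filled with simple constants for brevity.
    float *hA = new float[n * n], *hB = new float[n * n], *hC = new float[n * n];
    for (int i = 0; i < n * n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    // Device buffers and host-to-device copies via the C-like runtime API.
    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes); cudaMalloc(&dB, bytes); cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    // One thread per output element, in 16x16 blocks.
    dim3 block(16, 16);
    dim3 grid((n + block.x - 1) / block.x, (n + block.y - 1) / block.y);
    matmul<<<grid, block>>>(dA, dB, dC, n);

    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    printf("C[0] = %f\n", hC[0]);  // expect 1.0 * 2.0 * 512 = 1024

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    delete[] hA; delete[] hB; delete[] hC;
    return 0;
}
```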
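
As a rough sketch of the MPI + CUDA pattern mentioned in examples 12 and 19, the following scatters a data set from the root rank to every compute node, processes each chunk on a locally bound GPU, and gathers the results back. The element-wise "square" kernel and the chunk size are placeholders, not the charge-distribution or difference-image algorithms from the cited work.

```cuda
// MPI + CUDA sketch: data parallelism across nodes (MPI_Scatter/MPI_Gather)
// combined with data parallelism within each node (a CUDA kernel per chunk).
#include <mpi.h>
#include <vector>
#include <cstdio>
#include <cuda_runtime.h>

__global__ void square(float* x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= x[i];   // placeholder for the real per-chunk work
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Bind each rank to one of the GPUs visible on its node.
    int devices = 1;
    cudaGetDeviceCount(&devices);
    cudaSetDevice(rank % devices);

    const int chunk = 1 << 20;                 // elements per rank (illustrative)
    std::vector<float> full;                   // whole data set, only on the root
    if (rank == 0) full.assign((size_t)chunk * size, 3.0f);
    std::vector<float> local(chunk);

    // Distribute one chunk of the data to every compute node.
    MPI_Scatter(full.data(), chunk, MPI_FLOAT,
                local.data(), chunk, MPI_FLOAT, 0, MPI_COMM_WORLD);

    // Process the local chunk on the GPU.
    float* d = nullptr;
    cudaMalloc(&d, chunk * sizeof(float));
    cudaMemcpy(d, local.data(), chunk * sizeof(float), cudaMemcpyHostToDevice);
    square<<<(chunk + 255) / 256, 256>>>(d, chunk);
    cudaMemcpy(local.data(), d, chunk * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d);

    // Collect the processed chunks back on the root rank.
    MPI_Gather(local.data(), chunk, MPI_FLOAT,
               full.data(), chunk, MPI_FLOAT, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("full[0] = %f\n", full[0]);   // expect 9.0

    MPI_Finalize();
    return 0;
}
```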