GPU parallel program development using CUDA / (Record no. 30610)

MARC details
000 -LEADER
fixed length control field 02382nam a2200241Ia 4500
001 - CONTROL NUMBER
control field 41364
003 - CONTROL NUMBER IDENTIFIER
control field IN-BdCUP
005 - DATE AND TIME OF LATEST TRANSACTION
control field 20230421155114.0
008 - FIXED-LENGTH DATA ELEMENTS--GENERAL INFORMATION
fixed length control field 230413s2018 000 0 eng
020 ## - INTERNATIONAL STANDARD BOOK NUMBER
International Standard Book Number 9781498750752
040 ## - CATALOGING SOURCE
Language of cataloging eng
Transcribing agency IN-BdCUP
041 ## - LANGUAGE CODE
Language code of text/sound track or separate title eng
082 ## - DEWEY DECIMAL CLASSIFICATION NUMBER
Classification number 005.275
Item number SOY
100 ## - MAIN ENTRY--PERSONAL NAME
Personal name Soyata, Tolga
245 #0 - TITLE STATEMENT
Title GPU parallel program development using CUDA /
Statement of responsibility, etc. Soyata, Tolga
260 ## - PUBLICATION, DISTRIBUTION, ETC.
Place of publication, distribution, etc. Boca Raton :
Name of publisher, distributor, etc. CRC Press,
Date of publication, distribution, etc. 2018.
300 ## - PHYSICAL DESCRIPTION
Extent xxxv, 440 p. ;
Dimensions 25 cm.
520 ## - SUMMARY, ETC.
Summary, etc. GPU Parallel Program Development using CUDA teaches GPU programming by showing the differences among different families of GPUs. This approach prepares the reader for the next and future generations of GPUs. The book emphasizes concepts that will remain relevant for a long time, rather than concepts that are platform-specific. At the same time, it also provides platform-dependent explanations that are as valuable as the generalized GPU concepts. The book consists of three parts; it starts by explaining parallelism using CPU multi-threading in Part I. A few simple programs are used to demonstrate the concept of dividing a large task into multiple parallel sub-tasks and mapping them to CPU threads. Multiple ways of parallelizing the same task are analyzed, and their pros and cons are studied in terms of both core and memory operation. Part II of the book introduces GPU massive parallelism. The same programs are parallelized on multiple Nvidia GPU platforms and the same performance analysis is repeated. Because the core and memory structures of CPUs and GPUs are different, the results differ in interesting ways. The end goal is to make programmers aware of both the good ideas and the bad ideas, so readers can apply the good ideas and avoid the bad ideas in their own programs. Part III of the book provides pointers for readers who want to expand their horizons. It offers a brief introduction to popular CUDA libraries (such as cuBLAS, cuFFT, NPP, and Thrust), the OpenCL programming language, an overview of GPU programming using other programming languages and API libraries (such as Python, OpenCV, OpenGL, and Apple's Swift and Metal), and the deep learning library cuDNN.
650 ## - SUBJECT ADDED ENTRY--TOPICAL TERM
Topical term or geographic name entry element Parallel programming
Topical term or geographic name entry element CUDA (Computer architecture)
Topical term or geographic name entry element GPU parallel program development using CUDA
942 ## - ADDED ENTRY ELEMENTS (KOHA)
Source of classification or shelving scheme Dewey Decimal Classification
Koha item type Book
Holdings
Source of classification or shelving scheme Dewey Decimal Classification
Home library Ranganathan Library
Current library Ranganathan Library
Date acquired 16/02/2020
Source of acquisition Technical Books Bureau (India), New Delhi
Cost, normal purchase price 5244.02
Bill number TB2183
Full call number 005.275 SOY
Barcode 037921
Date last seen 13/04/2023
Actual cost, replacement price 3408.61
Bill date 10/02/2020
Koha item type Book