Join our team virtually at #MSBuild May 24-26, and learn more about the latest technologies, tools, and techniques for #developers and #datascientists to take #AI to production faster.
Preview our content line-up:
nvda.ws/3FL5oR3
We are excited to publish the Linux GPU kernel modules as #opensource under a dual GPL/MIT license, starting with the R515 driver release.
You can find the source code for these kernel modules in the NVIDIA Open GPU Kernel Modules repo on GitHub. nvda.ws/3swPGDO
The NVIDIA platform is the most mature and complete platform for accelerated computing. Explore the simplest, most productive, and most portable path to accelerated computing as we discuss three approaches you can take for programming GPUs.
Watch as we demonstrate the state of the art in writing application code that is parallel and ready to run on GPUs, CPUs, and more, using only C++, Fortran, and Python. See what’s possible and learn best practices in writing parallel code with standard language parallelism.
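As a taste of what standard language parallelism looks like on the Python side, here is a minimal sketch assuming the cuNumeric library (the Legate project's drop-in NumPy replacement); it is a generic illustration, not code from the session.

# Assumption: cuNumeric is installed; with plain NumPy, change the
# import back to "import numpy as np" and the code still runs serially.
import cunumeric as np

# Swapping the import is the entire port: each array expression below
# can then execute as a parallel kernel on one or more GPUs.
grid = np.zeros((4096, 4096))
grid[0, :] = 1.0  # hot top boundary

# 100 Jacobi-style smoothing sweeps written as ordinary array arithmetic
for _ in range(100):
    grid[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                               grid[1:-1, :-2] + grid[1:-1, 2:])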
Deep dive into the latest developments in NVIDIA software for #HPC applications, including a comprehensive look at what’s new in programming models, compilers, libraries, and tools. Watch Now.
Discover how @SiemensGamesa can quickly perform physics-informed super-resolution simulations that accurately reflect real-world performance using NVIDIA Modulus physics-ML models and @nvidiaomniverse. Watch now. nvda.ws/3P6tgmp
Join us at #ISC22 - check out our special address and keynote, swing by our partner booths, and view demos and content on our website. See how NVIDIA is #TransformingTheFuture.
The latest release of #NVIDIADOCA offers new features and enhancements enabling developers to rapidly develop #DPU applications to optimize their data center infrastructure. Learn more about the release. nvda.ws/3MBGxRW
NVIDIA Modulus is now available as a container on NGC, the NVIDIA GPU Cloud catalog, for #AI developers worldwide. Developers can start with a fully optimized neural network framework that blends the power of physics with #AI to build more robust models for better analysis. nvda.ws/3s7JZfa
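For readers new to the idea, here is a toy sketch of what blending physics with #AI means in practice: it trains a tiny physics-informed network on the ODE u'(x) = -u(x) with u(0) = 1 using plain PyTorch. This illustrates the concept only and is not the Modulus API.

import torch

# Small fully connected network approximating u(x)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(256, 1, requires_grad=True)         # collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    residual = (du + u).pow(2).mean()                  # enforce u' = -u
    bc = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # enforce u(0) = 1
    loss = residual + bc                               # physics-informed loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, net(x) approximates exp(-x) without ever seeing labeled data.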
What have we learned so far from the #GPU-accelerated NERSC Perlmutter supercomputer's fast-paced leap into AI-for-science?
See how NERSC has been ramping up its mixed #AI and #HPC workload capabilities. nvda.ws/3MG0s22
cuTENSOR can now distribute tensor contractions across multiple GPUs through a new library, cuTENSORMg (multi-GPU). Read all about it here!
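For context, a tensor contraction is the higher-rank generalization of matrix multiplication; the NumPy sketch below shows the operation itself (not the cuTENSORMg API), which the new library can execute, at far larger sizes, split across the combined memory of multiple GPUs.

import numpy as np

# C[m,n] = sum over k,l of A[m,k,l] * B[l,k,n]: a contraction over two modes
A = np.random.rand(32, 16, 8)
B = np.random.rand(8, 16, 64)
C = np.einsum('mkl,lkn->mn', A, B)
print(C.shape)  # (32, 64)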
Take a look at #FourCast, NVIDIA’s GPU-accelerated, AI-enabled digital twin of Earth. Learn how users can generate, visualize, and explore potential weather outcomes interactively with #AI and @nvidiaomniverse.
🏆 🏆 Congrats to Turing Award Winner Jack Dongarra!
We thank him for all of his pioneering work in computational software, which has paved the way for today's era of #HPC and #AI. Read about all of his accomplishments: nvda.ws/3JZBMkn
Check it out! Qiskit can now utilize NVIDIA’s cuQuantum software development kit to help accelerate quantum simulations on classical computers. Learn more with a demo on the Qiskit Blog here: qisk.it/3qrA5Vb
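Here is a minimal sketch of what that looks like from the Qiskit side, assuming a qiskit-aer build with GPU support (e.g., the qiskit-aer-gpu package); see the Qiskit Blog post above for the authoritative setup details.

from qiskit import QuantumCircuit, transpile
from qiskit.providers.aer import AerSimulator

# Small GHZ circuit: H on qubit 0, then a chain of CNOTs
qc = QuantumCircuit(3)
qc.h(0)
qc.cx(0, 1)
qc.cx(1, 2)
qc.measure_all()

# device='GPU' routes the statevector simulation to the GPU; recent
# qiskit-aer releases can dispatch this path to cuQuantum's cuStateVec.
sim = AerSimulator(method='statevector', device='GPU')
counts = sim.run(transpile(qc, sim), shots=1024).result().get_counts()
print(counts)  # expect roughly 50/50 between '000' and '111'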
Congrats to our collaborators at @nvidia, who continue to push the state of the art on quantum circuit compilation, simulation, and more — cuQuantum and nvq++ are amazing additions to our already-rich ecosystem of tools! blogs.nvidia.com/blog/2022/03/2…
Take a look at our rapid-fire overview of several new features we’ve recently added to our CUDA profiling, debugging, and IDE integration tools. Watch Now. #GTC22
Check out NVIDIA Clara Holoscan and see how @UCBerkeley researchers are using Holoscan and #AI to automatically detect biological events and auto-focus the microscope while the experiment is running. Watch Now.
Here’s a peek into #FourCast, NVIDIA’s GPU-accelerated, AI-enabled digital twin of Earth. Watch as it emulates the dynamics of global weather patterns and predicts extreme weather events with unprecedented speed and accuracy. #GTC22
Check out NVIDIA Senior Product Manager @sam_stanwyck as a guest on the @QubitGuy podcast. Listen now to hear them discuss all things quantum computing and more.
How does the new DGX cuQuantum Appliance benchmark against other quantum circuit simulation options on the market today? Find out in this #GTC22 quantum-focused session beginning shortly. nvda.ws/3utO6CC
#GTC22 was full of software releases, updates, and learning resources. From cuQuantum to Nsight Graphics, check out this recap of our top releases from this week.
Get a deeper dive into the keynote news on Sionna, an open-source library for accelerated 6G physical layer research, by watching this #GTC22 session [S41760]. Join us today at 10 a.m. PDT online. nvda.ws/3Jy8l8R
NVIDIA Modulus has been used to create physics-informed #digitaltwins of wind farms and power plants, and is also being applied to prognostics and health management. Learn more about this neural network framework. #GTC22
We shared only a few details about #NVIDIAHopper in our #GTC22 keynote. Now join our lead GPU architects for a deeper and more technical dive starting at 10 AM PDT.
➡️ Inside the NVIDIA Hopper Architecture [S42663] nvda.ws/3JxtpMr
Read how @SiemensGamesa is using NVIDIA Modulus and @nvidiaomniverse to create physics-informed digital twins of wind farms, enabling CFD #simulations 4000X faster than traditional methods. #GTC22
NVIDIA cuQuantum is now generally available, along with an expanding quantum computing ecosystem and collaborations with @IBM, @ORNL, @XanaduAI, @pasqalio, and others to build tomorrow’s most powerful systems. #GTC22 nvda.ws/36FF8u4
Announcing a new scientific digital twin platform to solve million-x scale #science and engineering problems faster than ever - using NVIDIA Modulus with physics-informed #AI and @nvidiaomniverse. #GTC22
Announcing the NVIDIA DGX H100 Systems, the world's most advanced enterprise #AI infrastructure built with the new NVIDIA H100 Tensor Core GPUs. #GTC22 nvda.ws/3JvZRiw
Learn about DPX instructions - a new instruction set built into NVIDIA H100 GPUs that will help developers write code to achieve up to 40x speedups on dynamic programming algorithms and boost workflows for disease diagnosis, quantum simulation, and more. nvda.ws/3tw8yUi
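For a sense of the workload class, here is a plain-Python sketch of Smith-Waterman local sequence alignment; its inner recurrence is exactly the fused add-then-max pattern that DPX instructions accelerate in hardware (this code runs on the CPU and is purely illustrative).

# Smith-Waterman: H[i][j] = max(0, diag + score, up + gap, left + gap)
def smith_waterman(s1, s2, match=2, mismatch=-1, gap=-2):
    rows, cols = len(s1) + 1, len(s2) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if s1[i-1] == s2[j-1] else mismatch)
            # the max(a + b, c) steps below are what DPX fuses into
            # single instructions on H100
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("GATTACA", "GCATGCU"))  # best local alignment score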
Learn how the NVIDIA H100's Transformer Engine, part of the new Hopper architecture, speeds up AI workloads by delivering up to 6x higher performance without losing accuracy. #GTC22 nvda.ws/3DgEhfH
Announcing the #NVIDIAHopper architecture - the next generation of accelerated computing that securely scales diverse workloads in every data center. #GTC22 nvda.ws/36hdcg9
When building your #AI application, wouldn't it be easier if you could develop once and deploy anywhere without code changes? It's now possible with NVIDIA's VMI. Watch our demo to learn more: nvda.ws/3CWLNvY
GPU computing has been revolutionizing the landscape for physics-based simulation in recent years. Find out how @ANSYS extended its GPU solver in Ansys Discovery to support computations on multiple GPUs and made it available to flagship Fluent users. #GTC22