10-13 March 2025
Sands Expo and Convention Centre
Marina Bay Sands, Singapore

FULL PROGRAMME

{‘General’: ‘🟢’,’Tutorial’: ‘🔵’,’Workshop’: ‘🔴’,’Plenary’: ‘🟣’,’Breakout’: ‘🟡’}
CFilter
DFilter

Location: Melati Ballroom Foyer (Level 4) 

Registration begins from 08:00am.

Location: Room P11 – Peony Jr 4511 (Level 4)

Abstract: The industry is experiencing a resurgence of interest in diverse computing architectures, with Arm technology leading this transformation. NVIDIA’s GH200 Grace Hopper™ Superchip exemplifies this trend, combining the NVIDIA Hopper GPU with the Grace CPU, featuring 72 high-performance Armv9 cores on a single die. This integration delivers competitive FP64 TFlops performance and up to 500GB/s memory bandwidth, all while maintaining industry-leading power efficiency. The tutorial aims to unlock the full potential of the Grace CPU and Grace Hopper GH200 Superchip for scientific computing. Experts will guide attendees through the process of compiling, executing, profiling, and optimizing code for Arm architecture, dispelling the notion that changing CPU architectures is challenging. The session will also demonstrate how to leverage the GH200’s unique architecture using various programming models, supported by practical examples that attendees can replicate. To facilitate hands-on learning, remote access to NVIDIA GH200 will be provided.

Important Notes to Participants: Please bring your laptop to fully participate in the interactive portions of the workshop.

Pre-requisites: No previous knowledge or experience on Arm-based systems is needed.

For any enquiries, please contact: SCA25_NV_GraceHopper@nvidia.com

Programme:

TimeSession
09:30am – 09:45amSession 1: Registration, Logistics and Welcome
– Filippo Spiga, Dr Gabriel Noaje

09:45am – 10:30amSession 2: NVIDIA Grace™, NVIDIA GH200 Grace Hopper™ Superchip, and NVIDIA GB200 Grace™ Blackwell Superchip Products and Platforms
– Gabriel Noaje

10:30am – 11:00amTea Break

11:00am – 11:30amThe Fastest and Energy Efficient Supercomputers – HPE Cray EX Supercomputer featuring NVIDIA GH200 Grace Hopper Superchip
– De-Iou Tsai

11:30am – 12:00amSession 3: Hardware Deep Dive
– Filippo Spiga

12:00am – 12:30pmSession 4: CPU Software Deep Dive 
– Filippo Spiga

12:30pm – 01:30pmLunch

01:30pm – 02:00pmSession 5: Programming Models Deep Dive
– Filippo Spiga

2:00pm – 02:30pmNVIDIA Grace Hopper (GH200) Superchip Customer Experience
– Dr Pascal Jahan Elahi

02:30pm – 03:00pmSession 6: Getting Ready for Hands-On + Live Demo
– Filippo Spiga

03:00pm – 03:30pmTea Break

03:30pm – 04:45pmNVIDIA Grace Hopper (GH200) Superchip Participants’ Hands-on / Bring Your Own Code
– Filippo Spiga, Dr Gabriel Noaje, Dr Wei Fang

04:45pm – 05:00pmWrap Up and Q&A
Filippo Spiga, Dr Gabriel Noaje

HPC

Location: Room O6 – Orchid Jr 4312 (Level 4)

Abstract: There is little doubt that we have entered an era where digital data underpins modern science and wider research endeavours. To support this, numerous infrastructures have been designed and built to store these data, ranging from proprietary on-premises systems through to commercial clouds and hybrids of both. Such implementations provide a range of functions during the research lifecycle, from provisioning and cataloguing data assets through to storing and presenting data to computing platforms.

These require advanced data infrastructures which can respond to increasing demands for high performance and scale, as well as support rich access models to increasingly complex data.

Importantly, research data workflows are different to traditional enterprise patterns of data movement, transactions, and growth. Conventional corporate storage systems are typically not fit-for-purpose as research storage systems from both a performance and business process perspective.

To date, the implementation of research data platforms has largely advanced in an ad-hoc way, often driven by the urgent need to deliver operational infrastructure within a constrained budget and sometimes driven more by what is available in the market than what would provide a powerful, flexible, and extensible system.

In recent work we have proposed and documented an abstract Research Data Reference Architecture (RDRA) which serves as a framework for guiding and classifying real world systems. This workshop is designed to both build a record of real research data infrastructures and measure them against the RDRA. This will both validate the RDRA and provide a practical resource for implementers.

This workshop is a continuation of a very successful AeRO Forum held at SCA 2024.

Workshop URL: https://davidabramson.org/documenting-and-classifying-research-data-infrastructure/

Programme:

TimeSession
09:00am – 09:20amThe RDRA and RDIA summary; workshop structure

– Professor David Abramson, University of Queensland

09:20am – 09:40amSpeaker #1

– Frank K Wuerthwein, SDSC/UCSD

09:40am – 10:00amSpeaker #2

– Beth Holtz, Princeton/TigerData

10:00am – 10:20amSpeaker #3

– John Westlund, LLNL

10:30am – 11:00amMorning Tea Break

11:00am – 11:20amSpeaker #4

– Osamu Tatebe, Tsukba

11:20am – 11:40amSpeaker #5

– Jake Carroll, UQ

11:40am – 12:00pmSpeaker #6

– Benjamin Wu/Delegate, NetApp

12:00pm – 12:20pmSpeaker #7

– Chris Maestas, IBM

12:30pm – 01:30pmLunch

01:30pm – 01:50pmCommentary thus far

– David Abramson

01:50pm – 02:10pmSpeaker #8

– Leslie Almberg, Arcitecta/UoM

02:10pm – 02:30pmSpeaker #9

– Jeffrey Tay, VAST

02:30pm – 02:50pmSpeaker #10

– Werner Scholz, Xenon

03:00pm – 03:30pmTea Break

03:30pm – 03:50pmSpeaker #11

– Luc Betbeder-Matibet, UNSW

03:50pm – 04:10pmSpeaker #12

– Ikki Fujiwara, NII

04:10pm – 04:30pmSpeaker #13

– Chris Schlipalius, Pawsey

04:30pm – 04:50pmSpeaker #14

– Rachana Anathakrishnan, Globus/UoC

04:50pm – 05:10pmSpeaker #15

– Paul Hiew, NSCC

05:10pm – 05:30pmSpeaker #16

– Shinji Kikuchi, Riken

05:30pm – 06:00pmSummary discussion, thank you to all participants, wrap up and next steps

– David Abramson, Jake Caroll, UQ
HPC

Location: Room O7 – Orchid Jr 4311 (Level 4)

Abstract: At the very least in the coming years, quantum computers are viewed mainly as a complementary computational resource, dedicated devices to supplement existing High Performance Computing (HPC) capabilities. For this reason, the deployment and integration of Quantum Processing Units (QPU) is well underway for various local and global players. The question is therefore what can we do with the current generation of QPUs, what gain, utility can come out from enriching an existing workflow with such devices.

For this reason, we propose a session to provide an overview on where current trends stand within the development of hybrid quantum-classical methods, that show a lot of promise in near term applications. The focus will be how said method(s) can be tailored and implemented for an efficient execution in an HPC environment. Various software tools and integration methods will be explored and illustrated through the example of Pasqal’s neutral atoms technology, focusing on a dedicated use case investigation from an actual industrial problem.

Workshop URL (For software tools):

Quantum Computing

Location: Room P8 – Peony Jr 4412 (Level 4)

Abstract: Large-scale ’foundation’ AI models show great promise for scientific discovery, with promising results being obtained in areas ranging from self-driving laboratories to hypothesis generation. But realizing this promise at scale will require unprecedented quantities of both computation to train models and multidisciplinary human effort to prepare diverse scientific data for use in model training and to construct evaluation suites to guide development. Only a small number of organizations have the resources to build models at state-of-the-art scales (e.g., trillions of parameters, trained using tens of trillions of tokens). This reality is already motivating the formation of multi-institutional teams to work together on model architecture, evaluation, and training as well as on the collaborative building and sharing of high-quality training data sets. This workshop will highlight such collaborations, which are being catalyzed by the international Trillion Parameter Consortium (TPC). The workshop will highlight progress in various aspects of generative AI for science and engineering with presentations from academics, national laboratories, HPC centers, industry, institutes, and leaders from funding agencies. The workshop will also introduce the structure and strategies of the TPC, with an overview of high-priority areas in which new collaborators can contribute and benefit from joining the consortium.

For any enquiries, please contact: mohamed.attia@riken.jp

Workshop URL: https://tpc.dev/tpc-workshop-at-sca-2025/

Programme:

TimeSession
09:00am – 09:10amOpening Remarks

– Jens Domke, RIKEN-CCS

09:10am – 09:45amInvited Talk #1

– Satoshi Matsuoka, RIKEN-CCS

09:45am – 10:20amInvited Talk #2

– Arvind Ramanathan, Argonne National Laboratory

10:20am – 10:50amTea Break



10:50am – 11:25amInvited Talk #3

– Speaker from Singapore (TBD)

11:25am – 12:00pmInvited Talk #4

– Speaker from Europe (TBD)

12:00pm – 01:30pmLunch

01:30pm – 02:00pmInvited Talk #5: Status and roadmap of the TPC

– Charles Catlett, Argonne National Laboratory

02:00pm – 02:20pmPresentation: Algebraic Approaches to Combining Multiple Large Language Models

– J. de Curtò, BSC-CNS

02:20pm – 02:40pmPresentation: MERaLiON-AudioLLM: Bridging Audio and Language with Large Language Models

– Shuo Sun, A*STAR

02:40pm – 03:00pm Presentation: Automated Detection of AI Training Jobs to Enhance Security In HPC Systems

– Francesco Antici, University of Bologna

03:00pm – 03:20pmTea Break

03:20pm – 03:40pmPresentation: Scientific Data Compression for Large Language Models

– Maximilian Sander, Technische Universität Dresden

03:40pm – 04:00pmPresentation: Advancing Autonomous Microscopy Agents with domain guided dynamic retrieval in a Virtual Foundation Model OS

– Gayathri Saranathan, Hewlett Packard Enterprise

04:00pm – 04:20pmClosing Remarks

– Mohamed Wahib, RIKEN CCS

Artificial Intelligence

Location: Room P10 – Peony Jr 4512 (Level 4)

Abstract: This workshop explores the seamless integration of classical and quantum resources for quantum-accelerated supercomputing. Featuring interactive presentations by Anyon Technologies, AWS, A*STAR IHPC, NVIDIA, Pawsey Supercomputing Centre, QuEra, and Quantinuum, participants will gain practical skills using the open-source, qubit-agnostic platform CUDA-Q. Demonstrations include real-world examples to develop, improve, and benchmark quantum-classical applications, and run workflows on the cloud, high-performance supercomputers, and various quantum computers.

The day begins with an overview of CUDA-Q, followed by sessions on QuEra’s neutral-atom technology and Pawsey’s hands-on presentation using NVIDIA GH200 Grace Hopper™ Superchips. Participants will then learn to submit from CUDA-Q to Quantinuum’s quantum computers, and to solve quantum chemistry problems with quantum phase estimation. Anyon and IHPC present a pulse-level emulator integrated with Qibolab and demonstrate an end-to-end workflow connecting quantum hardware with quantum software frameworks. The workshop concludes with a hands-on guide to using CUDA-Q with Amazon Braket.

For any enquiries, please contact: SCA25_NV_Quantum@nvidia.com

Important Notes to Participants: Please bring your laptop to fully participate in the interactive portions of the workshop.

Pre-requisites: Familiarity with quantum computing concepts and Python.

Programme:

TimeSession
09:00am – 09:30amHPE Talk with intro from NVIDIA

Towards heterogeneous quantum-classical supercomputing

– Dr Masoud Mohseni, HPE
– Dr Jin-Sung Kim, NVIDIA

09:30am – 10:30amQuEra

– Introduction to QuEra and Neutral-Atom Technology
– Analog Quantum Computing and Aquila
– Digital Quantum Computing with Neutral Atoms

– Dr Tommaso Macrì, Dr Jonathan Wurtz

10:30am -11:00amTea Break
11:00am – 12:30pmPawsey with examples from QuEra

– Introduction to Pawsey Supercomputing Research Centre
– Introduction to Pawsey’s Quantum Supercomputing Innovation Hub
– Introduction to Pawsey-QuEra partnership and projects
– Dive into Continuous Time Quantum Walks, hands-on example of running these problems in bloqade

– Dr Pascal Jahan Elahi

12:30pm – 01:30pmLunch
01:30pm – 02:30pmQuantinuum

– Introduction to Quantinuum and its integrated full-stack for quantum computing
– Introduction to Quantum Phase Estimation for solving quantum chemistry problems
– Demonstration of a QPE algorithm with CUDA-Q on Quantinuum H-series

– Dr Kentaro Yamamoto, Dr Enrico Rinaldi

02:30pm – 03:00pmAnyon and SDT Part 1

– Introduction of Anyon Computing/Anyon Technologies and related solid-state on-chip computing with Superconducting Quantum Circuits
– Hybrid Quantum Computing Cloud Service from Korea – QuREKA, and tutorial on QAOA with Fourier heuristics to solve a financial QUBO problem

– Dr Roger Luo, Mr Jiwon Yune, Mr Sunwoo Park

03:00pm – 03:30pmTea Break
03:30pm – 04:30pmAnyon and IHPC Part 2

– Enterprise applications – Quantum computing in Financial Services
– A*STAR/IHPC examples of hybrid computing for quantum applications and hardware development in collaboration with Anyon

– Dr Georgios Korpas, HSBC
– Dr Khoo Jun Yong, iHPC
– Mr Tan Kai Yong Andy, iHPC

04:30pm – 05:30pmAWS

– Introduction to Amazon Braket
– Getting started with NVIDIA CUDA-Q on Braket
– Hybrid Quantum Computing with Amazon Braket Hybrid Jobs
– Running CUDA-Q jobs on Braket-managed CPU, GPU, and QPU hardware

– Dr Tyler Takeshita, Dr Sebastian Stern

05:30pmClosing

Quantum Computing

Location: Room L1 – Lotus Jr (Level 4) 

Abstract: This quintessential DDN workshop will enable the attendees via –

  • Sven Oehme (Chief Technology Officer of DDN), along with DDN APJ executive leadership will share how they’re driving AI innovation at unprecedented speed, scale, and efficiency.
  • Get an exclusive look and hands-on DEMO of Infinia – DDN Data Intelligence Platform that deploys in minutes, reduces complexity, and accelerates model training 100x faster while using 75% less power.

Hear directly from NVIDIA and some of our customers on how they are using AI to solve complex challenges faster than ever before.

HPC

 Location: Room M2 – Melati Jr 4011 (Level 4)

Abstract: Digital Twins are accurate virtual replicas of real-world systems that combine sensor data with models to provide timely, beneficial information. Traditionally used in industry, the integration of high-performance computing (HPC), AI, and edge computing now allows digital twins to be applied in science and global policy. This includes areas like climate change, renewable energy, and healthcare, where real-time AI and uncertainty quantification can assist in life-saving decisions. Accurate simulations are becoming the primary source of virtual data, syncing with the real world across various scales, from subatomic to interstellar. The evolution from basic 3D models to near-identical digital twins is crucial for advancements in fields such as computational biomedicine, nuclear fusion, and factory automation. This workshop aims to bring together experts to discuss the challenges and opportunities in making digital twins a standard practice in HPC and to identify key principles for their effective use.

For any enquiries, please contact: SCA25_NV_DigitalTwin@nvidia.com

Workshop URL: https://sites.google.com/view/sca25-digitaltwins-workshop/home

Programme:

TimeSession
09:00am – 09:30amDigital Twins for Science, NVIDIA

– Barton Fiske, NVIDIA

09:30am – 10:00amKeynote Address – From Classrooms to Virtual Worlds: The Comprehensive Potential of Digital Twins

– Dr Budianto Tandianus, SIT; Megani Rajendran, NVIDIA

10:00am – 10:30amCreating Seamless City-Scale Digital Twins: Leveraging NVIDIA Omniverse for Urban Development

– Kenneth Sung, Metason

10:30am – 11:00amTea Break

11:00am – 11:30amUsing Digital Twins to Combat Climate Change

– Jeff Adie, NVIDIA

11:30am – 11:45amLenovo & NVIDIA: Powering the Future with Digital Twins and Omniverse

– Sinisa Nikolic, Lenovo Asia Pacific

11:45am – 12:30pmPanel Discussion + Questions / Discussions

– Barton Fiske, Senior Alliances Manager, NVIDIA
– Dr Budianto Tandianus, Senior Professional Officer, Singapore Institute of Technology
– Jeff Adie, Principal Solutions Architect, NVIDIA
– Kenneth Sung, Managing Director, Metason Limited
– Meg Rajendran, Solutions Architect, NVIDIA
– Sinisa Nikolic, Director of HPC/AI and CSP, Lenovo Asia Pacific
– Sean Whiteley, Founder, Axomem.io and Member of Digital Twin Consortium

HPC

Location: Room O4 – Orchid Jr 4212 (Level 4)

Abstract: The complexity of scientific research calls for dynamic integration of various interconnected scientific instruments for data generation (e.g., observation,) and data analysis (e.g., visualization). The capability of near real-time data processing across interconnected scientific instruments is the foundation of various scientific workflows, including both traditional human-in-the-loop and autonomous ones. This is because analysis results are needed near real-time to provide time-sensitive decision-making and to steer experiments. However, as the improvement of scientific instruments leads to the generation of scientific data with unprecedented volumes and modalities, it imposes a significant strain on data processing as data acquisition, sharing, and analysis will be prohibitively time- and energy-consuming with the increase of data volumes. This landscape highlights the growing need for research efforts that focus on optimizing all stages of data processing at an extreme scale to enable near real-time processing, including but not limited to acquisition, reduction, management, storage, sharing, and analysis.

Workshop URL: https://sites.google.com/charlotte.edu/nrdpisi-2

Programme:

TimeSession
09:00am – 09:05amWelcome and Introduction

09:05am – 09:35amInvited Talk

– Dr Chin Guok, Chief Technology Officer, Planning and Innovation Group Lead, Energy Sciences Network

09:35am – 10:00amPaper Presentation

– Dr Norbert Podhorszki, Distinguished Researcher Scientist, Oak Ridge National Laboratory

10:00am – 10:30amInvited Talk

– Dr Scott Klasky, Distinguished Scientist, Group Leader in the CSM Division

10:30am – 10:45amTea Break

10:45am – 11:15amInvited Talk

– Dr Rachana Ananthakrishnan, Executive Director and Head of Products for Globus, University of Chicago

11:15am – 11:40amPaper Presentation

– Dr Justin Wozniak, Computer Scientist, Argonne National Laboratory

11:40am – 12:00pmInvited Talk

– Dr Gabriel Noaje, HPC Business Development Lead, NVIDIA

12:00pmClosing

Green Computing

 Location: Room O5 – Orchid Jr 4211 (Level 4)

Abstract: High-Performance Networking technologies are generating a lot of excitement towards building next generation High-End Computing (HEC) systems for HPC and AI with GPGPUs, accelerators, and Data Center Processing Units (DPUs), and a variety of application workloads.

This tutorial will provide an overview of these emerging technologies, their architectural features, current market standing, and suitability for designing HEC systems. It will start with a brief overview of IB, HSE, RoCE, and Omni-Path interconnect. An in-depth overview of the architectural features of these interconnects will be presented. It will be followed with an overview of the emerging NVLink, NVLink2, NVSwitch, EFA, and Slingshot architectures.

We will then present advanced features of commodity high-performance networks that enable performance and scalability. We will then provide an overview of enhanced offload capable network adapters like DPUs/IPUs (Smart NICs), their capabilities and features. Next, an overview of software stacks for high-performance networks like Open Fabrics Verbs, LibFabrics, and UCX comparing the performance of these stacks will be given. Next, challenges in designing MPI library for these interconnects, solutions and sample performance numbers will be presented.

For any enquiries, please contact: Panda, Dhabaleswar <panda@cse.ohio-state.edu>; Subramoni, Hari <subramoni.1@osu.edu>; Michalowicz, Benjamin <michalowicz.2@osu.edu>

Workshop URL: https://nowlab.cse.ohio-state.edu/tutorials/scasia25-hpn/

Agenda:

  • Trends in High-End Computing
  • Why High-Performance Networking for HPC and AI?
    • TCP vs User-level communication protocols
    • Requirements (communication, I/O, performance, cost, RAS) from the perspective of designing next generation high-end systems and scalable data centers
    • Communication Model and Semantics of High-Performance Networks
  • Communication Model and Semantics of High-Performance Networks
  • Architectural Overview of High-Performance Networks
    • IB, HSE, their Convergence and Features
    • Omni-Path Interconnect Architecture
    • NVLink and NVSwitch Interconnect Architecture
    • AMD Infinity Fabric Interconnect Architecture
    • Amazon EFA Interconnect Architecture
    • Cray Slingshot Interconnect Architecture
  • Overview of Emerging Smart Network Interfaces
    • Architectural features and principles of offloading
    • Acceleration capabilities for HPC and AI applications
  • High-Performance Network Deployments for AI Workloads
    • Overview and architectural features of Cerebras WSE
    • Overview and architectural features of Habana Gaudi
  • Overview of Software Stacks for Commodity High-Performance Networks
    • Vendors, Switches, and Host Channel Adapters
    • Overview of OpenFabrics Architecture and Convergence
    • Pointers to IB, Omni-Path, and HSE Installations
  • Sample Case Studies and Performance Numbers
  • Hands-on Exercises
    • Evaluating and understanding the performance of high-performance networks at the fabric level
    • Evaluating and understanding the performance of high-performance networks at the MPI level
  • Conclusions and Final Q&A, and Discussion

HPC

Location: Room P9 – Peony Jr 4411 (Level 4)

Abstract: Explore the future of AI with AMD’s expert-led, hands-on workshops.

Join Getting Started with AMD Ryzen AI PC to build and customize your own AI agent or attend AI Science Discovery Made Easier with ROCm and AMD Instinct Accelerators to deploy and optimize Large Language Models (LLMs) with vLLM and SGLang. Gain practical insights, live demos, and valuable expertise from industry specialists.

Choose to attend one or both workshops and enhance your AI knowledge.

RSVP now and attend the workshop to stand a chance to win a Poly Sync 20 Wireless Speakerphone or a Lenovo ThinkVision M14 Monitor!

Plus, the first 20 attendees per session receive an additional gift.

RSVP Todayhttps://forms.gle/V81xQhWXkVwuzVAx7

Programme:

TimeSession
10:00am – 12:00pmGetting Started with AMD Ryzen AI PC

With the help of AI PCs, people can enjoy better gaming experiences and smoother workflows. This sharing will help you explore the potential of Ryzen AI PCs and briefly describe their powerful applications, such as live demonstrations of object detection, LLM and diffusion-based deployment on Edge, ensuring participants gain practical insights into AI implementation.

The presenters will guide participants through a hands-on experience, designed to kick-start their journey towards mastering AI using state of art AMD AI PC technology. Attendees will have the unique opportunity to construct and customize their own AI Agent, helping everyone start their AI journey with AI PCs.

– Ms Sonya Yang, Software Development Engineer, AMD Research and Advanced Development (RAD), AMD University Program (AUP), Advanced Micro Devices, Inc (AMD)

02:00pm – 06:00pmAI science discovery made easier with ROCm and AMD Instinct Accelerators

The application of open-source software development ecosystems, coupled with the most capable GPU Accelerators, is a driving force behind the AI Mega trend fueling business, scientific research and exploration of new frontiers today.

We will introduce AMD ROCm and explore how to easily deploy Large Language Models (LLMs) with vLLM and SGLang, using popular examples now applied to scientific research domains. For example, Llama V3.x and Deepseek. In addition, we will look at tuning approaches to best optimize model training and inference.

Note: This will be an instructor lead workshop with demos, but no individual access to MI300X GPUs. However, as there will be references to online resources and model repositories, you are encouraged to bring along a laptop to follow along and explore the live material. Documentation including the presentation and demos shown will be provided as handouts at the workshop. A basic knowledge of Pytorch, Docker Containers, AI training and AI Inference is expected for maximum benefit and understanding.

AMD is planning a follow-up workshop in Singapore, featuring hands-on lab exercises with AMD MI300X GPUs after SCAsia 2025. The exact date is yet to be confirmed. Visit the AMD booth #C4 at SCA to register your interest.


– Mr Greg Oakes, Senior HPC and Artificial Intelligence Specialist, AMD Global HPC and AI Centre of Excellence, Advanced Micro Devices, Inc (AMD)

Location: Bayview Foyer (Level 4)

Note: Lunch is not included in the programme for 10 Mar, but there are plenty of great dining options nearby for you to enjoy!

View Marina Bay Sands Dining Directory!

Location: Room L1 – Lotus Jr (Level 4)

Abstract: Organizations of all sizes are looking for AI infrastructure solutions to accelerate their generative AI initiatives. The rapid growth of AI is driving a massive increase in computing power and network speeds, creating high demands on storage. While NVIDIA GPUs offer scalable and efficient computing power, they need fast access to data. To solve this, NVIDIA and WEKA have partnered to create a high-performance, scalable AI solution for everyone. WEKA has launched WARRP, a flexible, infrastructure-independent blueprint for deploying high-performance Retrieval-Augmented Generation (RAG) applications. WARRP is built for scalability and efficiency, easily integrating with major cloud platforms like AWS and tools like NVIDIA NIMs and Kubernetes. This workshop will dive into the architecture and how it provides excellent linear scaling for training workloads, using WARRP with the WEKA and NVIDIA ecosystem both on-premises and in AWS with SageMaker HyperPod.

Artificial Intelligence

Location: Room M2 – Melati Jr 4011 (Level 4)

Abstract: Mat3ra.com offers a cloud-native digital platform to run large-scale HPC tasks with a particular focus on materials modeling. Our platform empowers researchers, scientists, and engineers by providing an intuitive web interface that simplifies complex HPC workflows while offering robust collaboration capabilities. Whether you prefer tried-and-tested command line access or programmable API integration, Mat3ra.com offers tailored solutions for materials science research and development across academia and industry. In this half-day workshop, we will showcase various advantages of the Mat3ra.com. Our focus will span from Density Functional Theory simulations to the application of machine learning in molecular dynamics. Participants will engage in extensive hands-on sessions. Additionally, we will highlight the scalability of our cloud infrastructure, enabling researchers to push the boundaries of what is possible in materials science research. Please join us to explore how Mat3ra.com can revolutionize your approach to materials science research and unlock new possibilities for innovation.

In this half-day workshop at the SupercomputingAsia 2025 conference, we will showcase the advantages of the Mat3ra.com platform over traditional HPC setups. Our focus will span from Density Functional Theory (DFT) simulations to the application of machine learning in molecular dynamics. Participants will engage in extensive hands-on tutorials to learn about various DFT methodologies, including bandstructure analysis, density of states calculations, spin-magnetism studies, spin-orbit coupling effects, phonon dispersion relationships, and advanced molecular dynamics simulations. Additionally, we will highlight the scalability of our cloud infrastructure, showcasing how it can accommodate task-intensive HPC workflows. Practical examples will illustrate the platform’s capacity to handle large datasets and complex computations efficiently, enabling researchers to push the boundaries of what is possible in materials science. Please join us to explore how Mat3ra.com can revolutionize your approach to materials science research and unlock new possibilities for innovation.

Important Notes:

  1. Participants must bring their laptop/computer.
  2. Participants need to purchase suitable ticket with access to the workshops and register for the conference at  https://sca25.sc-asia.org
  3. For up-to-date information on the workshop, please also register at the workshop page at https://www.mat3ra.com/events-posts/sca-2025

Workshop URL:

HPC

 Location: Room O5 – Orchid Jr 4211 (Level 4)

Abstract: The Workshop Advancing Energy and Resource Efficiency in Data Centers addresses the pressing environmental and operational challenges posed by the rising energy demands of modern data centers. These facilities are critical to supporting a wide range of applications, including cloud services, high-performance computing (HPC), and increasingly, artificial intelligence (AI) workloads. AI, in particular, presents a unique challenge due to its immense computational requirements, especially for GPU-intensive tasks, which significantly increase energy consumption and strain existing infrastructure.

The workshop will explore both hardware and software innovations aimed at improving energy and resource efficiency in data centers. It will focus on performance-per-watt technologies, system and infrastructure optimizations, and machine learning-driven energy consumption regulation. Special attention will be given to how data centers can effectively integrate AI workloads into established HPC systems without overwhelming energy resources. Topics will include smart scheduling algorithms, green coding practices, and scalable architectures that balance the power needs of AI with other computing services.

In addition to tackling AI’s energy demands, the workshop will highlight advances in cooling strategies, waste heat reuse, and hybrid computing infrastructures, all aimed at enhancing overall efficiency. The discussions will also cover assessment models for resource conservation and flexible solutions that can adapt to the evolving needs of the hybrid environments of AI and traditional HPC.

Through presentations, case studies, and collaborative discussions, the workshop seeks to bridge current practices with cutting-edge research, offering solutions that reduce the environmental impact of AI and HPC while ensuring data centers remain sustainable and economically viable. By integrating innovations across hardware, software, and operational strategies, the workshop aims to catalyze the development of energy-efficient, scalable systems that support the future growth of the global computing infrastructure.

Workshop URL: https://ee-workshop.for.lrz.de

Green Computing

Location: Bayview Foyer (Level 4)

Join us for an exclusive evening, where industry leaders and prominent speakers come together to connect, exchange ideas, and set the stage for SCA2025! As a special invite-only gathering, this reception brings together ACM ASEAN HPC School delegates and SCA2025 speakers for an exciting kickstart to the conference—offering a unique opportunity to network with fellow innovators, industry leaders, and HPC pioneers.

Location: Melati Ballroom Foyer (Level 4) 

Registration begins from 08:00am.

Location: Melati Ballroom (Level 4)

Location: Melati Ballroom (Level 4)

Guest-of-Honour: Mrs Josephine Teo, Minister for Digital Development and Information

Location: Melati Ballroom (Level 4)

Location: Orchid Ballroom (Level 4)

Location: Poster Presentations / Delegate Lounge, Melati Jr Room 4010-4110 (Level 4) 

Poster Presentation Timeslots:

  • 11 Mar 12:00pm – 01:00pm
  • 11 Mar 05:00pm – 06:00pm

Winner for the SCA2025 Best Student Poster, will be announced at the SCA2025 Papers Breakout Track (Room – P5, Peony Jr) on Wednesday, 12 March 2025, at 03:35pm.

Location: Melati Ballroom (Level 4)

Abstract: In this talk, we examine how high-performance computing has changed over the last 10 years and look toward the future regarding trends. These changes have had and will continue to impact our software significantly. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile–time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run–time environment variability will make these problems much harder.

Mixed precision numerical methods are paramount for increasing the throughput of traditional and artificial intelligence (AI) workloads beyond riding the wave of the hardware alone. Reducing precision comes at the price of trading away some accuracy for performance (reckless behavior) but in noncritical segments of the workflow (responsible behavior) so that the accuracy requirements of the application can still be satisfied.

HPC

Location: Melati Ballroom (Level 4)

Abstract: Quantum Computing is no longer a distant promise; it has arrived and is poised to revolutionize several economies.

In conjunction with AI, quantum computing is unlocking use cases that were once beyond reach. This keynote will describe how Quantinuum’s approach to Quantum Generative AI is driving breakthroughs in applications which hold significant relevance for Singapore, in fields like chemistry, computational biology, and finance. 


Additionally, we’ll discuss the challenges and opportunities of adopting quantum solutions from both technical and business perspectives, emphasizing the importance of collaboration to build quantum applications that integrate the best of quantum and AI.

Artificial Intelligence
Quantum Computing

Location: Melati Ballroom (Level 4)

Abstract: Supercomputers are among humanity’s most vital instruments, driving scientific breakthroughs and expanding the frontiers of knowledge. AI is reinventing computing and sparks a new industrial revolution. This talk will explore the ongoing impact of the accelerated computing revolution and project the future influence of AI. It all requires top-to-bottom optimization from chip silicon, interconnects, system design, data center design, and a full platform stack. We will also highlight how AI factories are becoming key drivers of this transformation, setting the stage for unprecedented advancements and societal changes.

Artificial Intelligence

Location: Melati Ballroom (Level 4)

Abstract: AI has rapidly emerged in scientific fields to revolutionize research, but to make an impact with robust and reliable models, AI requires specialized infrastructure. In this plenary session, we will explore how supercomputing has become foundational to deliver significant compute performance and strong scaling capabilities to support a new generation of AI training requirements. Hear how organizations are turning to fully integrated high-performance computing (HPC) systems to support their AI-driven research to increase predictability, accelerate discovery and solve some of humanity’s most complex challenges.

Artificial Intelligence

Location: Melati Ballroom (Level 4)

Abstract: The future of high-performance computing is being redefined, driven by AI, data-intensive workloads, and the need for sustainable innovation. As leaders of the field, we will explore how Lenovo is at the forefront of designing the supercomputer of the future, pushing the boundaries of architecture, chassis design, and infrastructure innovation to deliver next-generation performance. Gain insights into how Lenovo is reinventing HPC architecture to optimize efficiency, cooling, and scalability while ensuring investment protection for customers.

Location: Bayview Foyer (Level 4)

Location: Room M2 – Melati Jr 4011/4111 (Level 4)

Abstract: The annual HPC Centre Leaders Forum returns at SCA2025, continuing its tradition as a cornerstone of the conference. This highly anticipated session will feature prominent leaders from national HPC centres, who will share updates on infrastructure developments, highlight recent breakthroughs in HPC research, and discuss upcoming regional and international collaborations.

Track Chair: Mr Mark Stickells

[Invited Track]

Location: Room P6 – Peony Jr 4511-2 (Level 4)

Abstract: AI4Science in Singapore is a rapidly growing interdisciplinary field that integrates artificial intelligence (AI) with scientific research to accelerate discovery and innovation. Singapore’s strong investment in AI-driven research spans materials science, chemistry, biomedical sciences, and environmental sustainability. Key initiatives include AI-powered drug discovery, autonomous laboratories for materials synthesis, and machine learning models for climate and energy solutions. With leading institutions like A*STAR, NUS, NTU, and industry collaborations, Singapore is positioning itself as a global hub for AI-driven scientific breakthroughs, fostering innovation at the intersection of computation and experimentation.

track Co-chairs:

  • Dr Kedar Hippalgaonkar, Nanyang Technological University
  • Dr Sebastian Maurer-Stroh, Executive Director at Bioinformatics Institute, A*STAR

[Invited Track]

Programme:

TimeSession
01:30pmOpening

01:30pm – 02:10pmKeynote

– Prof Sir Kostya Novoselov, Director, Institute for Functional Intelligent Materials (I-FIM), National University of Singapore

02:10pm – 02:30pmAI for Science Gym: Training Bilinguals to Tokenize Complexity

– Assoc Prof Duane Loh, Associate Professor, Departments of Physics and Biological Sciences, Centre for Bioimaging Sciences, Data Science Institute, National University of Singapore

02:30pm – 02:50pmPredicting Structure-Property Relationships of Multi-component Alloys via Machine Learning

– Dr Teck Leong Tan, Department Director, Materials Science & Chemistry Dept., Institute of High Performance Computing, A*STAR

02:50pm – 03:10pmInvited

– Dr Juntao Yang, Solutions Architect(HPC/AI) – Nvidia AI Technology Center (NVAITC), NVIDIA

03:10pm – 03:30pmFrom Edge of Chaos to Intelligent Matter: MOF Platforms for Evolving, Brain-Inspired Computation

– Adj Prof Andrey Ustyuzhanin, Chief Science Officer, Constructor Tech

03:30pm – 04:00pmTea Break

04:00pm – 04:40pmDesigning Tomorrow‘s Therapeutics

– Prof Dr Gisbert Schneider, Professor, Computer-Assisted Drug Design, ETH Zurich

04:40pm – 05:10pmInvited

– Dr Fan Hao

05:10pm – 05:35pmInvited

– Dr Yinghua Yao, Research Scientist at CFAR, A*STAR

05:35pm – 06:00pmAI for Semiconductors

– Dr J Senthilnath, Research Scientist at I2R, A*STAR

06:00pmClosing

Location: Room O3 – Orchid Jr 4211-2 (Level 4)

Abstract: This track aims to foster collaboration and innovation in the Finance ecosystem by providing a dynamic platform to showcase achievements from partnerships with leading industry players. It offers a unique opportunity to connect Fintech companies with cutting-edge technology providers, facilitating meaningful exchanges and partnerships. The event underscores the importance of synergistic collaborations in driving technological advancements and scaling solutions within the Finance domain, making it an essential gathering for stakeholders committed to shaping the future of Fintech and Blockchain.

Track Chair: Dr Rick Goh, Director, Computing and Intelligence, IHPC, A*STAR

[Invited Track]

Programme:

TimeSession
01:30pm – 01:50pmOpening Remarks and Presentation

– Dr Rick Goh, Director, Computing and Intelligence, IHPC, A*STAR

01:50pm – 02:05pmGen AI and Personalized Client Engagement

– Sushil Anand, Head of Digital, Advisory and Managed Investments, Wealth and Retail Banking, Standard Chartered Bank

02:05pm – 02:20pmOptimizing and Benchmarking Open-Source LLMs for Conversational AI

– Donghao Huang, Vice President Research and Development, Mastercard

02:20pm – 02:35pmEnabling Emerging Technologies Adoption

Emerging Technologies are oftentimes a paradigm shift from the technologies of today. How do we bridge the gap, and understand where and when there is value to be extracted?

– Jorden Seet, Head of Emerging Technologies Engineering, OCBC

02:35pm – 02:50pm[Topic TBC]

– Andrew Marchen, GM, Payments Technology, M-Daq

02:50pm – 03:20pmPanel Discussion on “AI in Action: Transforming Financial Services from Operations to Customer Experience”

– Andrew Marchen (M-Daq)
– Jorden Seet (OCBC)
– Donghao Huang (Mastercard)
– Sushil Anand (SCB)
– Dr Rick Goh (Moderator)

03:30pm – 04:00pmTea Break

04:00pm – 04:15pmPitch Session

Accelerating Impact through Generative AI

– Kopal Agarwal, Co founder, Qatalyst

AI-Powered Crypto Crime Investigations

As Web3 grows, so do security threats. This session explores how AI-driven tools enhance crypto investigations, enabling real-time tracking and risk assessments. Patrick Kim shares insights from working with law enforcement and the future of AI in blockchain security.

– Patrick Kim, Founder, Uppsala Security

AI, Blockchain, and IP Rights: Turning Intellectual Property into an Investable Asset Class

Intellectual property (IP) is a valuable yet underutilized asset due to fragmented rights management and lack of structured valuation. Engram leverages AI to manage IP ownership and usage rights, while blockchain ensures secure, transparent transactions. By tokenizing IP, enabling fractional ownership, usage rights, and automating royalties, we bridge the gap between creators and investors, transforming IP into a liquid, investable financial asset.

– Eugene Liang, Product Lead, Engram

04:15pm – 04:30pmRegTech Use Of LLM To Map Regulatory Obligations

The promise of GenAI holds significant potential in Finance in navigating the intricate landscape of regulatory requirements and constant changes. Our project, using the FCA Handbook regulations, delves into the complexities of frameworks and revealed dense network cross-references, nuances in wordings and complexities like definitions and abbreviations. By leveraging large language models (LLMs) and our own knowledge base, we aimed to unlock efficiencies by automating the regulatory mapping to our risk management framework. Using LLMS in mapping obligations suggests that future regulatory compliance and change management will be shaped by effectively harnessing GenAI capabilities, leading to more uniform and simplified governance of risk management frameworks.

– Thorsten Neumann, Venture Building Lead, AI, Standard Chartered Bank

04:30pm – 04:45pmOpen Source & AI in Fintech: Driving Innovation and Scalability in Financial Services

– Cynthia Ding, Founding Partner, Yincubator /SEGA Ventures

04:45pm – 05:00pmHyper Scalable Networks with Real World Utilities

– Shawn Tham, CEO of QPin Labs Pte Ltd

05:00pm – 05:15pmUnlocking a New Paradigm with Crypto and AI

– Herbert Yang, GM of Asia, Dfinity

05:15pm – 05:45pmPanel Discussion on “Convergence Horizons: Navigating the Blockchain-GenAI-HPC Nexus for Venture Innovation”

– Rick Liang (Yincubator)
– Herbert Yang (Dfinity)
– Shawn Tham (QPin)
– Thorsten Neumann (SCV)
– Kopal Agarwal (Moderator)
05:45pm – 06:00pmNetworking

06:00pmClosing

Location: Room L1 – Lotus Jr (Level 4)

Track Chair: Mr Tommy Ng, NSCC

[Invited Track]

Programme:

TimeSession
01:30PM – 01:50PMAI-Driven Intelligent Agents for Healthcare: Enhancing Efficiency, Accuracy, and Automation

This talk introduces AI-driven intelligent agents that are transforming healthcare by enhancing decision-making, efficiency, and automation. Our Recommendation Agents predict ICD codes by leveraging patient history, while Summarization Agents extract key insights from complex clinical data. Abnormality Detection Agents identify subtle anomalies for early intervention, Robust ASR Agents improve medical transcription in noisy environments, and Code Generation Agents streamline software development. By reducing administrative burdens, improving accuracy, and optimizing patient care, these agents enhance healthcare operations. Our developments integrate state-of-the-art AI techniques, ensuring precision, adaptability, and reliability in real-world healthcare applications.

– Dr Robby Tan, Chief Scientist, AICS (ASUS Intelligent Cloud Services)

01:50PM – 02:10PMLenovo EveryScale™: Seamless Scalability
In today’s rapidly evolving digital landscape, organizations need scalable, efficient, and future-ready supercomputing solutions that can adapt to the demands of AI, HPC, and enterprise workloads. Lenovo EveryScale™ is redefining how businesses and research institutions scale their computing power—offering a modular, flexible, and high-performance architecture designed for the future. Learn how we design with for your purposes and understand the intricacies of building the high-compute power you need now, and for the future.

– Ms April Chen, HPC Product Manager, Lenovo

02:10PM – 02:30PMNVIDIA CUDA-Q: The Platform for Hybrid Quantum Application Development

Useful quantum computing of the future will inherently be hybrid with CPUs, GPUs, and QPUs working in tandem to solve the world’s most important problems. In this talk we’ll discuss how NVIDIA CUDA-Q™ is enabling the entire quantum ecosystem to build and leverage heterogeneous quantum-classical systems to accelerate quantum computing today.

– Dr Jin-Sung Kim, Developer Relations Manager, Quantum Computing, NVIDIA

02:30PM – 02:50PMThe Engines of Innovation: HPC, AI, and Quantum

Innovation has always leveraged computing — a lot of computing. Two decades ago, that meant applying the well-worn technology of HPC clusters, executing code that solved math equations using physics and theory. Today, AI has become a second engine of innovation, based on predicting outcomes, using massive input data sets. Tomorrow, perhaps literally, Quantum computing will become a third engine, based on real-world analog techniques using probabilities. These engines have three things in common: they are complex (so ease of use is critical), they are expensive (so efficiency is critical), and they are fragile (so mitigating faults is critical).

– Dr Bill Nitzberg, Chief Scientist, Altair

02:50PM – 03:10PM A Real-life Analysis of Checkpointing with PyTorch and the Megatron Language Model Framework by DDN

An Analysis of the performance and impact of I/Os in AI training checkpointing with PyTorch/Megatron-LM Framework

– Mr Heoh Chin-Fah, Head of Systems Engineering (ASEAN), DDN

03:10PM – 03:30PMIt’s DeepSeek’s World, and we’re all just living in it

DeepSeek has recently released their Open-Infra-Index, detailing how their groundbreaking V3 & R1 models were built, and now operate. Join WEKA’s head of AI product strategy Val Bercovici, as he discusses the implications for LLMs and LRMs, as well as the highly regarded infrastructure and disruptive economics of a world class AI service.

– Mr Valentin Bercovici, Head of AI Strategy & Open Source, WEKA

03:30PM – 04:00PMTea Break
Tea Break is served at the Exhibition Room (Orchid Ballroom).

04:00PM – 04:20PMHow HPE and NVIDIA helps organizations accelerate adoption of AI leveraging turnkey solutions to AI Factory at scale

HPE combines our expertise in delivering the easiest to use private cloud environments with our decades of expertise in building and supporting the world’s largest Supercomputing clusters and in collaboration with NVIDIA, HPE provides the widest range of solutions from turnkey solutions for organizations looking to take their first step in AI, to the largest scale systems for national labs, research and academic institutions who are ready to deploy AI at scale.

We will delve into the full stack of components required for an end to end AI infrastructure and how HPE leverages our unique IP in conjunction with NVIDIA’s full stack to provide the most integrated and scalable solutions.

– Mr Steve Tolnai, Chief Technologist, HPC and AI, Hewlett Packard Enterprise

04:20PM – 04:40PMConsiderations for Management of Hybrid HPC-AI-Quantum Cluster Infrastructures

Classic solutions for HPC cluster management reach their limits when trying to integrate different CPU and accelerator architectures, container workflows, resource intensive interactive sessions, AI workloads, and (in the future) quantum computing capabilities. Additional requirements for multi-tenancy and virtualisation as well as infrastructure-as-code approaches are not supported by many infrastructure management tools. In this presentation we will discuss novel approaches and best practices to address these complex requirements, and which new aspects need to be considered to ensure reliability, flexibility, security, performance, long-term support and ease-of-use. We will also review common solutions and introduce XENON’s own cluster management framework “XENON Cluster Stack”.

– Dr Werner Scholz, CTO and Head of R&D, XENON Systems

04:40PM – 05:00PMUnleash AI Power with Alibaba Cloud

Alibaba Cloud empowers you to unlock new possibilities, drive innovation, and achieve your goals through the transformative power of AI. Leverage cutting-edge technologies like Qwen, PAI, and OpenSearch to build and deploy AI solutions that deliver tangible business outcomes.

– Increase efficiency with AI-powered automation.
– Gain deeper insights with AI-driven analytics.
– Enhance customer experiences with personalized AI.
– Drive innovation and growth with AI-powered solutions.

Access a comprehensive suite of AI tools and services, backed by Alibaba Cloud’s robust infrastructure, flexible pricing, and expert support. Transform your business and unleash your potential with AI.

– Ms Kho Khai En, Senior Solutions Architect, Alibaba Cloud Singapore

05:00PM – 05:20PMScalable and Open AI & The HPC Platform

– Mr Yasuo Ishii, Fellow – RISC-V CPU Architecture, Tenstorrent

05:20PM – 05:40PMPurpose-building Architectures for Rack-scale HPC

You will understand overall Supermicro product portfolio.

– Mr Michael Wangsahardja, Senior Field Application Engineer, Super Micro Computer, Inc.

05:40PM – 06:00PMTransformative value of Quantum and AI: bringing meaningful insights for critical applications today

The ability to solve classically intractable problems defines the transformative value of quantum computing, offering new tools to redefine industries and address complex humanity challenges. Quantinuum’s hardware is leading the way in achieving early fault-tolerance, marking a significant step forward in computational capabilities. By integrating quantum technology with AI and high-performance computing, we are building systems designed to address real-world issues with efficiency, precision and scale. This approach empowers critical applications from hydrogen fuel cells and carbon capture to precision medicine, food security, and cybersecurity, providing meaningful insights at a commercial level today.

– Dr Elvira Shishenina, Senior Director of Strategic Initiatives, Quantinuum

Artificial Intelligence
HPC
Quantum Computing

Location: Room O4 – Orchid Jr 4311-2 (Level 4)

Abstract: Transitioning to the AI-Driven Scientific Discovery Era. AI is permeating every facet of technology, heralding a transformative era in scientific research, and presenting an unprecedented opportunity to enhance simulation accuracy and drastically reduce prediction times. From traditional computing paradigms to AI-accelerated models, this shift constitutes a new industrial revolution that is reshaping the industry. We will equip students with a cutting-edge programming model that integrates both HPC and AI, and bridge the gap between academic knowledge and state-of-the-art industry applications.

Track Chair: Mr Song Qingchun, HPC-AI Advisory Council, APAC

[Invited Track]

Programme:

TimeSession
02:00pm Opening

– Mr Qingchun Song, Chair of HPC-AI Advisory Council, Asia

02:00pm – 02:20pm The HPC-AI Advisory Council

HPC-AI Advisory Council is an organization with a vision to bridge the gap between the state of the art in HPC and AI and its practice. This is achieved through things such as workshops, student competitions, open computing laboratory and publishing best practices. The organization includes more than 450 member companies, universities, and research centers.

Computationally, HPC and AI computing is towards the use of increasing amounts of accelerated system, as is reflected in the Top500 list. From a simulation perspective, with the end of Dennard Scaling, focus is shifting towards combining traditional HPC simulations with AI-based algorithms to reach the next level of simulation capabilities. These trends will be briefly discussed.

– Dr Richard Graham, HPC|Scale Special Interest Group Chair, HPC-AI Advisory Council

02:20pm – 02:40pm HPC Software Development – Challenges and the Future

With new processors being introduced every one to two years, HPC software development is facing tremendous challenges to catch up and optimise for the latest architectures. A piece of software usually stays in use for a much longer period than the lifetime of a HPC system. On top of that, there are multiple competing parallel programming frameworks posting confusions to the software developers who are having hard time to pick a framework. In this talk, we walk through the current landscape of parallel programming. We share our experience and view in HPC software development, helping the audience to navigate through the challenges to the future.

– Mr Chung Shin Yee, NSCC Singapore

02:40pm – 03:00pm National Computational Infrastructure, Australia

The presentation will focus on the computational capabilities of Australian NCI’s HPC Cluster, Gadi and the skills development initiatives that are helping National and International researchers leverage the high-performance capabilities.

– Dr Abdullah Shaikh, Training Manager, NCI Australia

03:00pm – 03:20pm Generative AI for 3D Generation and Editing

Recent developments in Generative AI has witnessed remarkable success in synthesizing visual content, especially images and videos. In contrast, the quality of 3D generation still lags behind. In this talk, I will present several works that push the boundaries of 3D generation through efficient and scalable architecture designs. First, I will introduce how a structured 3D latent space enhances the capabilities of 3D diffusion models. Next, I will present SAR3D, an efficient framework for generating 3D objects using autoregressive next-scale prediction. Finally, I will demonstrate how we enable flexible user interaction in 3D content creation through point-dragging editing for 3D objects.

– Dr Pan Xingang, Assistant Professor, National Technological University

03:20pm – 03:30pm 2024 APAC HPC-AI Competition Award Ceremony

– Mr Qingchun Song, HPC-AI Advisory Council
– Dr Terence Hung, NSCC
– Dr Abdullah Shaikh, NCI

03:30pm – 04:00pmTea Break

04:00pm – 04:20pm VAST Data platform: The Data Platform for the AI Era

The VAST Data Platform introduces a groundbreaking approach to AI and data analytics infrastructure through its innovative Disaggregated Shared Everything (DASE) architecture. By unifying storage, database, and containerized compute into a single, scalable software platform, it addresses critical challenges in modern data center and cloud environments. The platform’s unique capability lies in its ability to seamlessly integrate unstructured and structured data using declarative functions, creating a global data-defined computing environment. This talk will explore the platform’s technical architecture, demonstrate its transformative potential through compelling use cases, and showcase real-world customer success stories that highlight its impact on AI and deep learning workflows.

– Dr James Chen, Senior Solutions Engineer, VAST

04:20pm – 04:40pm NVIDIA Modulus Accelerates AI4S Development

In this report, we will introduce NVIDIA Modulus, a physical machine learning development framework, and its application examples in computational fluid dynamics. NVIDIA Modulus is a physics-based, operator-learning environment designed to help scientists and engineers solve complex scientific and engineering problems. NVIDIA Modulus is an open-source framework for building, training, and fine-tuning physical machine learning models with a simple Python interface. NVIDIA Modulus uses neural networks to simulate physical systems that can be used for weather simulation, computational fluid dynamics, heat transfer, structural mechanics, molecular dynamics, and more.

– Dr Lyulin Kuang, Solution Architect, NVIDIA

04:40pm – 05:00pm The Blueprint for a Sustainable AI Factory

The presentation details Firmus Technologies’ state of the art AI Factory. Utilizing cutting-edge liquid immersion cooling technology, Firmus achieves a 45% improvement in energy efficiency and a 30% reduction in costs compared to traditional air methods. Our scalable solution supports retrofitting legacy data centers or building new, energy-dense facilities, aligning with Singapore’s Green Plan 2030. The platform powers AI factories with NVIDIA-certified H100/H200 GPU clusters, enabling high-performance AI model training while minimizing environmental impact.

– Dr Daniel Kearney, Chief Technology Officer, Firmus Technologies

05:00pm – 05:20pm Student Competition Experience Sharing

In this AI and scientific computing era, the demand of computation capability is growing drastically year after year. As a student, I’ll go through the experience of my HPC route. How I was trained, What resources should be provided when educating HPC courses, and the benefit of take part in student cluster event. And lastly, the reason why we should put more effort at colledge end to help student to catch on the trend of industry.

– Mr Jason Lin, Student, National TsingHua University

05:20pm – 05:40pm Introduction of Competition Achievements in HPC-AI 2024

Through our participation in multiple HPC-AI competitions, our team has gain invaluable insights from our seniors in large-scale HPC system. The competition provided us with access to state-of-the-art computing resources and interesting applications, enabling our hands-on experience with distributed system. We successfully implemented various optimization techniques and achieved significant performance improvements, particularly in areas of communication overhead reduction and memory utilization. Building upon this competition experience, we aim to further improve ourselves on understanding and creating novel approaches in AI system infrastructure, contributes to the broader field of HPC.

– Mr Ng Woon Yee; Mr Bryan Shan, Student, Nanyang Technological University

05:40pm – 06:00pm Growing Through HPC AI Competitions to Inspire Community Innovation

Over seven years, Thammasat University’s participation in HPC-AI competitions has transformed its team, deepening HPC expertise, accessing advanced supercomputing resources, and fostering collaborations. As a coach, competition successes opened doors to prestigious opportunities, such as ACM Summer Schools and EU ASEAN HPC Schools, reclaiming missed opportunities and building a robust international network. Competition achievements help in secure funding from Thailand’s National Research Council for the HPC Ignite project. This initiative bridges HPC education and industry, empowering Northern Thailand’s economic sectors with support from local stakeholders and ThaiSC. Thammasat University Lampang Campus has transitioned from competition-based learning to impactful, community-driven initiatives, showcasing the value of global collaboration and mentorship gained through HPC-AI Advisory Council competitions.

– Dr Worawan Diaz Carballo (Marurngsith), Assistant Professor, Thammasat University

06:00pm Closing

– Mr Pengzhi Zhu, Lab Manager, HPC-AI Advisory Council

Artificial Intelligence

Location: Room P5 – Peony Jr 4411-2 (Level 4)

Track Chair: Prof DK Panda

[Peer-Reviewed]

Programme:

TimeSession
01:30pm – 01:35pmOpening, AI & HPC

– Prof Dhabaleswar K (DK) Panda, Professor & University Distinguished Scholar

01:35pm – 02:15pmImproving the Efficiency of a Deep Reinforcement Learning-Based Power Management System for HPC Clusters Using Curriculum Learning

Powering down idle nodes in HPC systems can save energy, but improper shutdown timing may degrade the Quality of Service (QoS). We propose a Deep Reinforcement Learning (DRL) agent enhanced with Curriculum Learning (CL) to optimize node shutdown timing. Using Batsim-py, we compare various curriculum strategies, with the easy-to-hard approach achieving the best energy—consuming 3.73% less energy than the existing DRL agent and 4.66% less than the best timeout policy. Additionally, job waiting time is reduced by 9.24%. We also evaluate the model’s generality across diverse scenarios. These findings demonstrate the effectiveness of CL and DRL for HPC power management.

– Prof Muhammad Alfian Amrizal, Assistant Professor, Universitas Gadjah Mada

02:15pm – 02:55pmAnomaly Detection in Large-Scale Monitoring Systems using a Language Model

Anomaly Detection in large-scale monitoring systems, especially within high-performance computing (HPC), is a significant challenge because disruptions when the computer running can operations and reduce overall efficiency. We propose a novel framework called Anomaly Detection in Large‐Scale Monitoring Systems using a Language Model (AD‐LM). This framework uses a language-model-driven workflow for anomaly detection. Starting with, AD-LM applies BERTopic for topic modelling, which groups log entries into meaningful clusters, helping to expose patterns that indicate potential anomalies. The next technique, using a graph-based classification model identifies system failures by capturing key relationships within both HPC and large-scale logs. This framework supports high-speed processing and minimal memory usage—essential qualities in HPC settings. We evaluated AD-LM on three real-world log datasets (Hadoop Distributed File Systems, BlueGene/L, and Thunderbird) and achieved F1-scores of 0.995, 0.997, and 0.998, respectively—outstanding well-known anomaly-detection benchmarks with little overhead. Our findings confirm AD-LM’s effectiveness for real-time anomaly detection in HPC and large-scale scenarios, underscoring its robustness, adaptability, and efficient resource consumption.

– Mr Supasate Vorathammathorn, Research Assistant, King Mongkut’s University of Technology Thonburi

02:55pm – 03:35pmShould AI Optimize Your Code? A Comparative Study of Classical Optimizing Compilers Versus Current Large Language Models

This study aims to answer a fundamental question for the compiler community:”Can AI-driven models revolutionize the way we approach code optimization?”. This paper presents a comparative analysis between three classical optimizing compilers and two state-of-the-art Large Language Models, assessing their respective abilities and limitations in optimizing code for maximum efficiency. Additionally, we introduce a benchmark suite of challenging optimization patterns and an automatic mechanism for evaluating performance and correctness of the code generated by LLMs. We used three different prompting methodologies to assess the performance of the LLMs – Simple Instruction (IP), Detailed Instruction Prompting (DIP) and Chain of Thought (CoT).

– Mr Miguel Rosas, Ph.D. Candidate, University of Delaware

03:35pm – 04:00pmTea Break

04:00pm – 04:05pmPerformance Optimization, Tools, and Energy Efficiency

– Prof Yao Chen, Research Assistant Professor, National University of Singapore

04:05pm – 04:45pmTaming The Overhead of Hiding Samples in Deep Neural Network Training

Recent empirical evidence indicates that there are performance benefits associated with (1) using larger datasets during the training of deep neural networks (DNN), and (2) scaling to unprecedented dataset sizes for pre-training attention-based models. However, the downside of using large datasets is the increased cost of training and the pressure on non-compute sub-systems of supercomputers and clusters used for DNN training (e.g., the file system). In this work, we focus on reducing the total amount of training data samples while maintaining the accuracy level. Recent online \textit{sample hiding} approach proposed to dynamically hide the least-important samples in a dataset during the training process to reduce the total amount of computing and the training time, while maintaining the accuracy level. However, estimating the importance of samples leads to a non-trivial additional overhead. In this study, we propose an efficient mechanism to approximate the importance of samples for reducing such overhead. Empirical results on various datasets and models show that our proposed method (\textbf{ESH}) can remove most of the overhead, e.g., ESH reduces the total training time by up to 27.9\% compared to the baseline by hiding 28.8\% number of samples on average during the training.

– Dr Truong Thao Nguyen, Researcher, National Institute of Advanced Industrial Science and Technology (AIST), Japan

04:45pm – 05:25pmResearch and Development of Evaluation Tools on User Job Level Index of HPC Cluster

As supercomputing internet expands, the number of users grows rapidly, but proficiency varies across disciplines. To enhance user capabilities and optimize cluster resource utilization, this paper proposes a quantifiable evaluation system for user job levels on HPC clusters. Using Shanghai Jiao Tong University’s supercomputer as an example, we detail the system’s design, including indicator selection, data processing, weighting, and index calculation. Indicators reflect job frequency, efficiency, and parallel computing skills. We employ logarithmic and normalization treatments, use the entropy method and AHP for weighting. We developed an open-source evaluation tool, which helps cluster administrators monitor the job levels of users, and guide users to get improvements.

– Ms Gao Yiqin, Engineer, Shanghai Jiao Tong University

05:25pm – 06:05pm Smart In-Situ Visualization using Information Entropy-based Viewpoint Selection and Smooth Camera Path Generation

In-situ visualization has received increasing attention as an effective approach to reduce the data I/O and storage demands, particularly in HPC-based large-scale simulations, where data is processed online as it is being generated rather than stored for later offline visual analysis. This work presents an alternative in-situ visualization approach based on smart visualization to generate a subset of rendering images to assist the offline interactive visual analysis tasks. It combines information entropy-based viewpoint selections with a smooth camera path interpolation to generate a sequence of time-lapse rendering images that can later be manipulated interactively via a GUI-based viewer.

– Mr Kazuya Adachi, Graduate student, Kobe University, RIKEN R-CCS

06:05pmClosing

Location: Orchid Ballroom (Level 4)

End SCA2025 Day 1 with an evening of networking, delicious food, and refreshing drinks!

Join us from 06:00 PM to 06:45 PM at the Exhibition Hall (Orchid Ballroom) for welcome drinks as you connect with fellow attendees and our partners.

Then, make your way to the Bayview Foyer from 06:45 PM to 08:00 PM to indulge in a sumptuous dinner while continuing engaging conversations and forging new collaborations.

Location: Melati Ballroom Foyer (Level 4) 

Registration begins from 08:00am.

Location: Poster Presentations / Delegate Lounge, Melati Jr Room 4010-4110 (Level 4) 

Winner for the SCA2025 Best Student Poster, will be announced at the SCA2025 Papers Breakout Track (Room – P5, Peony Jr) on Wednesday, 12 March 2025, at 03:35PM.

Location: Melati Ballroom (Level 4)

Abstract: Achieving the promise of AI “foundation” models for scientific discovery requires immense computational power and multidisciplinary collaboration. Although only a handful of organizations have the resources necessary to train these models at scale, new strategies have emerged using high-performance pretrained models, such as domain-specific fine-tuning, and agentic constructs. International collaboration is accelerating progress in key areas such as data preparation; evaluation for scientific reasoning, trustworthiness, and safety; and application-focused fine-tuning. This presentation highlights the progress of working groups convened by the international Trillion Parameter Consortium across these and other areas. Together, these efforts enable the scientific community to navigate a rapidly evolving AI landscape.

Artificial Intelligence

Location: Melati Ballroom (Level 4)

Abstract: HPC and AI are driving the next wave of scientific discovery and technological advancement. As workloads become increasingly complex, the need for scalable, energy-efficient, and high-performance compute solutions has never been greater. AMD is at the forefront of this evolution, delivering cutting-edge innovations that power AI-driven research, scientific simulations, and enterprise AI deployments.

Location: Melati Ballroom (Level 4)

Abstract: DDN presents a ground breaking new data platform for Massive Scale AI Development and Production. A modern, Software architecture that has been designed from the ground up to enable the next generations of AI model creation, by handling distributed data, massive metadata and vastly simplifying how organizations handle AI models through to production.

Artificial Intelligence

Location: Orchid Ballroom (Level 4)

Location: Melati Ballroom (Level 4)

Abstract: HPC and Supercomputing require balancing node and interconnect performance for optimal efficiency. HPE and AMD are revolutionizing AI-driven supercomputing with innovations in HPC systems, interconnect technologies, and AMD’s latest processors and accelerators, enabling scalable AI architectures. Learn how HPE and AMD are advancing open industry standards through initiatives like the Ultra Ethernet and Ultra Accelerator Link Consortia, and get insights into the future of supercomputing architectures.

Artificial Intelligence
HPC

Location: Melati Ballroom (Level 4)

Abstract: Explore Taiwan’s advancements in AI infrastructure, focusing on AI cluster design, innovative services, and workload optimization. Peter will highlight how Taiwan leverages high-performance computing, cloud-native technologies, and generative AI solutions to meet diverse industry demands. Real-world case studies showcase how AI clusters drive innovation, support digital transformation, and create significant opportunities for Taiwan’s AI hardware and software development and applications.

Artificial Intelligence

Location: Melati Ballroom (Level 4)

Abstract: As artificial intelligence (AI) continues to evolve, its integration with high-performance computing (HPC) is driving unprecedented breakthroughs in scientific discovery, industry applications, and national research initiatives. This thought leadership panel at Supercomputing Asia 2025 will bring together leading experts from academia, industry, and government to explore how AI is reshaping the landscape for HPC, economies and societies. The panel will delve into the future of AI in HPC, discussing innovations in hardware accelerators, AI-driven optimizations, and the evolution of supercomputing architectures to support large-scale AI models. Panelists will share insights on the latest advancements in AI-enhanced simulations, hybrid AI-HPC workflows, and the potential impact of quantum computing on AI performance. The session will also explore real-world applications, from national AI strategies and what’s required to stay ahead in the transformative era of the AI-HPC revolution.

Panellists:

  • Prof Satoshi Matsuoka
  • Mr Charlie Catlett
  • Dr Terence Hung
  • Dr Kimmo Koski
  • Dr Leslie Teo
  • Mr John Josephakis (Moderator)

Artificial Intelligence

Location: Bayview Foyer (Level 4)

Location: Room O3 – Orchid Jr 4211-2 (Level 4)

Abstract: Hybrid quantum-classical computing is emerging as a practical approach to integrating quantum with classical high-performance computing. This track will bring together speakers from industry and academia to discuss advancements, challenges, and real-world applications of hybrid systems. The session aims to provide insights into ongoing research, industry adoption, and the potential of hybrid computing in solving complex problems.


Track Chair: Dr Su Yi, Executive Director, IHPC and National Quantum Computing Hub Lead PI

[Invited Track]

Programme:

TimeSession
11:00am – 11:15amOpening and MoU Signing

11:15am – 11:45amAccelerated Quantum Supercomputing at NVIDIA

Quantum computing has the potential to offer giant leaps in computational capabilities, impacting a range of industries from drug discovery to portfolio optimization. Realizing these benefits requires pushing the boundaries of quantum information science in the development of algorithms, research into more capable quantum processors and error correction, and the creation of tightly integrated quantum-classical systems and tools. We’ll review the challenges facing quantum computing and reveal exciting developments in how AI supercomputing can help solved them.

– Dr Elica Kyoseva, Director for Quantum Algorithm Engineering, NVIDIA

11:45am – 12:15pmAdvanced Quantum Programming: Going beyond Quantum Circuits

Quantum computers have the potential to drastically outperform conventional computers for a variety of tasks, from simulating molecular interactions to machine learning. Getting to useful quantum computers rely on finding algorithms that are tailored to their microscopic nature, through using quantum interference and other quantum mechanisms, and having hardware that is able to run them reliably. Both hardware and software barriers still remain, but they are coming down fast because of advancements in the field. We are at an inflection point where useful quantum computing might not be too far away. One of the barriers that remain is that better ways of programming quantum computers are still needed. To that end, pushing the frontier of quantum programming languages will help simplify the task of programming quantum processors by going beyond gate-by-gate descriptions of quantum circuits and increasing levels of abstraction. At Horizon Quantum Computing, we are building tools to code, compile and deploy from these higher levels of abstraction, and move towards a fully automated synthesis of quantum algorithms.

– Dr Si-Hui Tan, Chief Science Officer, Horizon Quantum

12:30pm – 01:30pmLunch

01:30pm – 02:00pmQuantum Horizons: NQCH 3.0 Pioneering Industry-Driven Innovation in Singapore and Beyond

The National Quantum Computing Hub (NQCH) 3.0 represents Singapore’s strategic leap into transforming quantum potential into real-world impact. Supported by the Quantum Engineering Programme (QEP) 3.0, this initiative accelerates industry-driven quantum applications through targeted collaborations in high-value sectors such as computational biology, finance, logistics, and green chemistry. By fostering partnerships between industry leaders, quantum hardware providers, and research teams, NQCH 3.0 bridges quantum algorithms with practical business challenges—from drug discovery and portfolio optimisation to sustainable supply chains. The hub prioritises scalable Proof-of-Concepts (POCs), sovereign quantum software capabilities, and a vibrant R&D ecosystem, positioning Singapore as a regional innovation nexus. This keynote will explore how NQCH 3.0 integrates cross-sector expertise, aligns with current quantum hardware advancements, and cultivates talent to unlock economic viability and global competitiveness, ensuring Singapore remains at the forefront of the quantum revolution.

– Dr Su Yi, Executive Director, Institute of High Performance Computing (IHPC), A*STAR

02:00pm – 02:20pmJHPC Quantum Project for Building Quantum-HPC Hybrid Computing Platform

As the number of qubits in advanced quantum computers is getting larger over 100 qubits, demands for the integration of quantum computers and HPC are gradually growing for realizing “Quantum Utility”. The quantum computing technology is a promising component in near future HPC system to accelerate computational science. RIKEN R-CCS has been conducting JHPC quantum project, which is a project to design and build a quantum-supercomputer hybrid computing platform which integrates different kinds of on-premises quantum computers, superconducting quantum computer from IBM and trapped-ion quantum computer from Quantinuum, with supercomputers including Fugaku. In this presentation, the overview and current status of the JHPC quantum project with a perspective of quantum HPC hybrid computing will be presented.

– Prof Mitsuhisa Sato, Director, Quantum-HPC Hybrid Software Environment, Quantum-HPC Hybrid Platform Division, R-CCS

02:20pm – 02:40pmFull-Stack Quantum Middleware for Seamless Hybrid Quantum-Classical Integration

This talk explores quantum middleware’s crucial role in integrating hybrid quantum-classical algorithms. Focusing on Qibo, a full-stack open source quantum middleware framework, we demonstrate its capabilities for circuit design, simulation, and hardware deployment. We briefly introduce hybrid algorithms and showcase Qibo’s use in implementing hybrid models. Finally, we summarize challenges in achieving seamless integration, including optimizing communication, developing error mitigation techniques, and creating user-friendly tools for complex hybrid workflows. We discuss future directions for middleware development to unlock the full potential of hybrid quantum computing.

– A/Prof Stefanno Carrazza, Associate Professor, University of Milan

02:40pm – 03:00pmAccelerating Quantum Computing R&D with Amazon Braket

Amazon Braket, the quantum computing service by AWS, provides access to quantum processors with different programming paradigms and a variety of technologies. In concert with other services of the AWS cloud, Amazon Braket enables large-scale research projects producing verifiable and reproducible findings. Quantum devices can be accessed on-demand, via dedicated hybrid jobs, and through exclusive reservations, each being suitable for different stages of quantum computing research projects. By collaborating with top researchers from organization around the globe, Braket is accelerating the search for hybrid quantum classical algorithms for practical applications.

– Mr Peter Komar, Sr. Scientist, Amazon Web Services

03:00pm – 03:20pmQubit Efficient Quantum-Classical Computing and Applications in Logistics, Defense and Manufacturing

We will present some highlights of some of our industry deployments of our qubit efficient, hybrid quantum classical computing solutions, for real world optimization problems. The relevant use cases will be from logistics, energy, aviation and finance, sectors, all done in partnership with leading global corporations of these sectors. We will sketch the basic workings of our approach, allowing in certain cases up 200X improvements on the problem size solvable with current or near term quantum processors. If time, we will also present some examples of our ready to be deployed quantum software solutions and API offerings.

– Prof Dimitris Angelakis, Founder, AngelQ and Principal Investigator, CQT

03:30pm – 04:00pmTea Break

04:00pm – 04:20pmTowards Category Theory-Based Modeling for Large-Scale Quantum-Classical Optimization of Complex Enterprise Networks

Fault-tolerant quantum computers hold the potential to efficiently solve certain computational problems that are intractable for classical computers. However, the pathway to translating this quantum advantage into practical real-world applications remains uncertain. In this talk, I will present our efforts at QTFT to bridge this gap, with a particular focus on large-scale optimization of complex enterprise networks (CENs) that encompass multiple inter-connected silos, such as supply chain networks, logistics networks, energy grids, and aviation systems.

I will begin by addressing the common challenges associated with quantum optimization for CENs. Subsequently, I will explore how category theory can provide a robust framework for overcoming these challenges by enabling the modeling of CENs as systems of systems with reconfigurable interactions.

– Mr Jirawat Tangpanitanon, CEO and Co-founder of Quantum Technology Foundation (Thailand) – QTFT

04:20pm – 04:40pmGPU-accelerated Quantum Emulation: Towards Accurate Quantum Chemistry

Hybrid quantum-classical Adaptive Variational Quantum Eigensolvers (VQE) hold the potential to outperform classical computing for simulating quantum many-body systems. However, their practical implementation on current quantum processing units (QPUs) faces challenges in measuring a polynomially scaling number of observables during the operator selection so as to optimise a high-dimensional and noisy cost function. In this talk, I will present new results obtained with our in-house Hyperion-1 GPU-accelerated quantum emulator and explain how one can use it to perform fully adaptive-VQE (Variational Quantum Eigensolver) – large scale simulations – reaching the equivalent of hundreds of logical qubits.

– Prof Jean-Philip Piquemal, CSO and Co-Founder, QubitPharma

04:40pm – 05:00pmQuantifying Quantum Advantage with an End-to-End Quantum Algorithm for the Jones Polynomial

We present an end-to-end reconfigurable algorithmic pipeline for solving a famous problem in knot theory using a noisy digital quantum computer. Specifically, we estimate the value of the Jones polynomial at the fifth root of unity within additive error for any input link, i.e. a closed braid. This problem is DQC1-complete for Markov-closed braids and BQP-complete for Plat-closed braids, and we accommodate both versions of the problem. We demonstrate our quantum algorithm on Quantinuum’s H2 quantum computer and show the effect of problem-tailored error-mitigation techniques. Further, leveraging that the Jones polynomial is a link invariant, we construct an efficiently verifiable benchmark to characterize the effect of noise present in a given quantum processor. In parallel, we implement and benchmark the state-of-the-art tensor-network-based classical algorithms. The practical tools provided in this work allow for precise resource estimation to identify near-term quantum advantage for a meaningful quantum-native problem in knot theory.

– Dr Konstantinos Meichanetzidis, Head of Scientific Product Development, Quantinuum

05:00pm – 05:20pmQuantum Computers Accelerating Supercomputing Workflows

Quantum computers (QC) can significantly enhance high-performance computing (HPC) as accelerators with unique capabilities for solving challenging chemistry, materials science, and optimization problems. Hybrid HPC+QC system offers unique advantages that neither classical nor quantum simulations can achieve independently. Our collaboration between IQM, a leading quantum hardware company, and the Leibniz Supercomputing Centre (LRZ), a premier HPC center, has demonstrated practical integration of quantum and classical resources. In this talk, we discuss how researchers from University College London, were able to conduct multiscale molecular simulation, where Quantum-Selected Configuration Interaction (QSCI) is employed to investigate proton transfer in interacting water molecules. We present the details of our technical implementation, including hardware and software requirements, networking and selection of the appropriate space to house the quantum computer. We also discuss how QC can be integrated with minimal disruption into HPC workflows and the benefits of on-prem QC.

– Dr Hermanni Heimonen, Head of Product, IQM Quantum Computers

05:20pm – 05:40pmPotential and Challenges of Quantum Transformer Architectures for Bioinformatics Applications

Generative machine learning methods such as large-language models are revolutionizing the creation of text and images. While these models are powerful they also harness a large amount of computational resources. The talk revisits transformer architectures under the lens of fault-tolerant quantum computing. The talk discusses potential input and output models, and quantum subroutines for the main building blocks of the transformer, including attention computation, residual connections, layer normalization, and the feed-forward neural networks. We discuss the potential and challenges for obtaining a quantum advantage with an eye on classification tasks in bioinformatics.

– Asst Prof Patrick Rebentrost, Principal Investigator, Centre for Quantum Technologies

05:40pmClosing

Quantum Computing

Location: Room O4 – Orchid Jr 4311-2 (Level 4)

Abstract: Join us for an engaging presentation at the NSCC HPC Innovation Challenge track session! Our finalist teams will showcase their innovative AI solutions, revealing the unique use cases and insights they’ve gained from leveraging NSCC’s high performance computing to turn their ideas into impactful solutions. Don’t miss this opportunity to witness cutting-edge innovation in action!

Track Chair: Ms Angie Huang

[Invited Track]

Programme:

TimeSession
01:30pm – 03:30pmOpening & Introduction
– Ms Angie Huang, Senior Assistant Director (Strategy, Planning and Engagement), NSCC Singapore

Use Case 1 Sharing: Climate Action: Real-Time Emissions & Risk Intelligence Powered by HPC & AI

What if we could see emissions before they happen? WeavAir leverages high-performance computing (HPC), AI-driven predictive analytics, and multi-sensor networks to track, analyze, and reduce climate risks in real time. Our digital twin software integrates hyperspectral satellite data, IoT, and machine learning, offering organizations unparalleled insights into energy waste, carbon footprint, and climate vulnerabilities. Learn how WeavAir’s innovation is empowering industries, cities, and businesses to enhance climate resilience, meet ESG goals, and unlock new revenue streams through carbon markets.

Winner of HPCIC23 (Open Category)
– Natalia Mykhaylova (Team WeavInsight)

Use Case 2 Sharing: OwlShield: A Real-Time LLM Safety Firewall for Secure and Compliant AI Applications

OwlShield is a real-time LLM safety firewall that mitigates critical AI Security risks through a modular, low-code middleware solution. Our approach includes a suite of Shields—such as VectorDB Shield, Toxicity Shield, Privacy Shield, and Prompt Leakage Shield—to detect adversarial attacks, toxic content, and compliance violations.

Winner of HPCIC 2023 (Student Senior category)
– Lye Jia Jun, Alex Chien, Oh Tien Cheng, Wong Zhao Wu (Team OwlShield, NUS & SMU)

Use Case 3 Sharing:
(Oculis: Using Artificial Intelligence to Help the Visually Impaired Navigate Their Surroundings)


Oculis is an AI-powered mobile app that helps visually impaired individuals identify arriving buses in real time. Using a fine-tuned YOLO11 model trained on Singapore-specific bus data, it detects bus numbers and announces them to users. Future plans include expanding detection to road signs, landmarks, and travel destinations for a more comprehensive navigation aid.

Winner of HPCIC 2023 (Student Junior category)
– Lee Kiah Hong, Chia Wee Leong, Ryan Yeo, Jeyakumar Sriram (Team Water, SP)

Use Case 4 Sharing: (AI and HPC for Transforming Functional Accessibility)

Discover the transformative potential of High-Performance Computing (HPC) and Artificial Intelligence (AI) in crafting a groundbreaking mobile application. This intelligent app empowers visually impaired individuals to navigate their surroundings and access information effortlessly. By harnessing the latest advancements in language and vision models, the application redefines accessibility, enabling users to interact with their environment in smarter, more intuitive ways.

1st runner-up of HPCIC 2023 (Student Senior category)
– Duong Ngoc Yen, Wang Ruisi, Zou Zeren (Team Supernova, NTU)

Closing
– Ms Angie Huang, Senior Assistant Director (Strategy, Planning and Engagement), NSCC Singapore

Location: Room M2 – Melati Jr 4011/4111 (Level 4)

Abstract: This track explores the intersection of High Performance Computing (HPC) and Artificial Intelligence (AI) for accelerating key industrial, national, and global applications. Leading experts will discuss advancements in architecture and systems, including next-generation GPUs and AI accelerators. Topics include architectural breakthroughs, performance optimization, energy-efficient designs, and real-world applications. Attendees will gain insights into software-hardware co-design, tools for accelerating AI/ML models, and emerging hardware architectures for accelerating large models such as LLMs. This track enables collaboration among researchers, engineers and various industry players to tackle various aspects of AI development.


Track Chair: Dr Rick Goh, Director, Computing and Intelligence, IHPC, A*STAR

[Invited Track]

Programme:

TimeSession
01:30pm – 01:45pmOpening

– Dr Rick Goh, Director, Computing and Intelligence, IHPC, A*STAR

01:45pm – 02:00pmAI for Science Projects at Riken

AI for Science is becoming one of the core research activities at Riken, Japan’s premiere national lab for Science. Among the major efforts, TRIP-AGIS is the flagship project where we seek to establish generative AI models for Science across various disciplines such as biology and material science. The outcomes of the project will heavily affect the next generation HPC flagship project in Japan, FugakuNEXT, where we hope to achieve zettascale performance in AI capabilities and deep synergy with traditional HPC.

– Prof Satoshi Matsuoka, Professor, R-CCS

02:00pm – 02:15pm What’s Next in AI Starts Here

This talk explores how AI augments human intelligence instead of being at odds. Real-world examples increasingly demonstrate how AI partners with humans, from medical diagnostics to artistic creation. We discuss AI’s role in boosting creativity, decision-making, and productivity across various fields; and, the concept of human-AI symbiosis that emphasises collaboration over competition. The key takeaway: “The future isn’t AI vs. humans—it’s AI + humans.”

– Dr Ng Aik Beng, Senior Regional Manager, NVIDIA AI Technology Center

02:15pm – 02:30pmAn HPC-AI Fusion Approach Towards Earth System Modeling on Next-Generation Supercomputers

The Earth, as one of the most enduring yet complex research subjects, remains a paramount target for computational modeling across the globe. Despite decades of exponential increases in computing power—culminating in the era of kilometer-resolution Earth system modeling—we still face significant challenges in accurately simulating certain meteorological phenomena.

In this talk, we present our latest advancements in leveraging state-of-the-art supercomputers to further enhance weather and climate model capabilities. By tightly integrating high-performance computing (HPC)-based numerical models with data-driven AI methodologies, our approach aims to improve predictive accuracy for extreme events and address particularly challenging forecast windows, such as subseasonal and seasonal rainfall predictions. We believe that a robust, synergistic fusion of HPC and AI holds the promise of delivering groundbreaking breakthroughs in Earth system modeling.

– Prof Fu Haohuan, Professor, Tsinghua University

02:30pm – 02:45pmExpanding HPC Capabilities with AI Innovations

Research institutions around the world are building AI assets and cross-domain agents to implement complex scientific workflows. One of the largest challenges these institutions face is how to provide the performance users need at a sustainable cost and power level. SambaNova will discuss how a variety of research institutions are deploying production interference services based on SambaNova to address these challenges.

– Jennifer Glore, VP of Customer Engineering, SambaNova Systems

02:45pm – 03:30pmPanel Discussion 1: “Advancing AI through Next-Generation HPC”

In this panel, speakers from Session 1 will discuss future high-performance architectures, technologies and developments that will drive new capabilities in larges-scale AI.

– Prof Satoshi Matsuoka
– Marshall Choy
– Jennifer Glore
– Prof Fu Haohuan
– Dr Ng Aik Beng
– Moderated by Dr Rick Goh

03:30pm – 04:00pmTea Break

04:00pm – 04:05pmPitch 1: Multi-Agent AI Systems for Materials Innovation: Accelerating Discovery with Adaptive AI

This presentation explores the integration of multi-agent AI systems in materials innovation. Dr. Liu will demonstrate how adaptive AI, combined with active learning and uncertainty quantification, can optimize material properties from sparse experimental datasets. The session will outline key challenges in materials R&D and showcase DeepVerse’s proprietary platform—featuring solutions such as Lab Assistant and Auto Lab—that delivers breakthrough efficiency gains. Attendees will gain insights into the future of computationally assisted materials research.

– Dr Fredrik Liu, CEO and Co-founder, DeepVerse

04:05pm – 04:10pmPitch 2: HPC and AI: A Synergistic Partnership for the Future of Computing

This talk explores the synergistic relationship between High-Performance Computing (HPC) and Artificial Intelligence (AI), highlighting how this partnership is shaping the future of computing. HPC provides the computational power needed to train and deploy complex AI models, while AI algorithms optimize HPC systems, improving efficiency and performance. This synergy is driving breakthroughs across various fields, from scientific research and healthcare to finance and engineering. The talk will also discuss future trends, including AI-driven hardware architectures, edge computing, and quantum computing, and address the challenges and opportunities presented by this convergence.

– Christopher Yeo, CEO and Founder, Sentient.io

04:10pm – 04:15pmPitch 3: Meeting Users Where They Are, & Empowering Them To Go Where They Want, How They Want

How Tenstorrent is using open standards, hardware, and software to enable the next generation of researchers

– Felix Leclair, Field Application Engineer-HPC, Tenstorrent

04:15pm – 04:30pmModeling and Simulation or AI? How will HPC look like in 10 years?

AI is rapidly becoming the fastest emerging workload in HPC. However, traditional classical simulations, such as Molecular Dynamics, Engineering, and Weather and Climate modeling, continue to dominate the HPC landscape. This talk explores the future interplay between AI and classical simulation workloads. Will AI eventually supplant traditional methods in the medium term, or will we see the integration of AI-augmented simulations and simulation-augmented AI? We delve into the characteristics of such workloads across diverse fields, from Healthcare to Physical simulations, and examine scenarios where the synergy of AI and traditional simulations can be harnessed effectively, as well as instances where AI may completely take over. Join us as we outline the promising convergence of AI and classical simulations, paving the way for innovative advancements in HPC.

– Prof Torsten Hoefler, Professor, ETH Zurich

04:30pm – 04:45pmEfficient Large-Scale Training and Inference on Wafer-Scale Clusters

The Cerebras hardware and software stack is co-designed for efficient training and low-latency inference of large-scale models. Weight streaming execution allows distributed training in a strictly data-parallel form for models and clusters of arbitrary sizes, avoiding complex and time-consuming hybrid distribution techniques. A large pool of on-chip memory enables ultra-low-latency autoregressive inference. Leveraging our experience in training large language models (LLMs) and multi-modal models, we will share insights into optimizing model architectures and training strategies for compute-efficient training. Additionally, we will explore hardware-optimized LLM mapping to wafer-scale clusters for low-latency autoregressive inference.

– Dr Natalia Vassilieva, VP and Field CTO, Cerebras

04:45pm – 05:00pmHybrid Quantum-Classical Computing for Next-Gen AI

Quantum computing is poised to revolutionize supercomputing and its applications, particularly through enabling the development of next-gen AI with quantum machine learning (QML). Novel QML models can be developed by leveraging near-term GPU-QPU hybrid systems through GPU emulation and hybrid variational quantum approaches utilizing both classical and quantum hardware resources. Recent advancements demonstrate the potential of QML to achieve precision and generalizability on par with, or surpassing, classical models in processing real-world data and tasks. This talk will explore the integration of quantum computing with GPUs, highlighting the transformative potential of hybrid quantum-classical systems in developing next-gen AI.

– Dr Jie Luo (Roger), Co-Founder and CEO, Anyon Technologies

05:00pm – 05:45pmPanel Discussion 2: “Empowering Industry Technologies through HPC and AI”

In this panel, speakers from Session 2 will discuss the ongoing developments and challenges in bridging HPC-AI and essential use cases across science and industry. We will explore the role of HPC-AI as a catalyst in areas including climate science, modelling and simulation, quantum computing.

– Prof Torsten Hoefler
– Dr Natalia Vassilieva
– Dr Jie Luo (Roger)
– Dr Liu Yong
– Moderated by Dr Yang Liwei

05:45pmClosing

Location: Room L1 – Lotus Jr (Level 4)


Track Chair: Mr Tommy Ng, NSCC

[Invited Track]

TimeSession
01:30PM – 01:50PMPowering the Next Wave of Generative AI for Scientific Innovation and Discovery

Founded in Silicon Valley in 2017, SambaNova Systems develops full-stack AI systems based on a unique reconfigurable dataflow architecture. SambaNova’s AI platform, best-in-class for the generative AI, allows it to contribute to increasingly large-scale scientific and technical computations.

This session will showcase SambaNova’s latest initiatives in AI for Science, conducted in collaboration with leading institutions such as Argonne National Laboratory, Lawrence Livermore National Laboratory, Oak Ridge National Laboratory, and the Texas Advanced Computing Center.

– Mr Marshall Choy, Senior Vice President, Products, SambaNova Systems

01:50PM – 02:10PMAccelerating the Future: Cloud HPC Across Industries

High Performance Computing on AWS is revolutionizing research and innovation across diverse industries including life sciences, financial services, manufacturing and more. Our comprehensive HPC solution encompasses advanced compute, network, and storage infrastructure, coupled with powerful orchestration tools, designed to tackle the most complex computational challenges. By deeply understanding customer use cases, we’ve engineered cloud-based HPC solutions that simplify the orchestration of sophisticated environments, allowing researchers and businesses to focus on groundbreaking discoveries rather than infrastructure management. Join us to explore how AWS is transforming the HPC landscape, making supercomputing-class resources accessible and cost-effective for organizations of all sizes, and accelerating time-to-insight across various scientific and industrial domains.

– Mr Ian Colle, General Manager of Advanced Computing and Simulation, Amazon Web Services

02:10PM – 02:30PMFugakuNEXT Project for Pioneering the Post-Exascale Supercomputing Era

The demand for high-performance computing continues to grow, serving as a vital foundation for both scientific research and AI. As we move beyond the exascale era, developing next-generation supercomputing systems is essential to establish a new computing infrastructure for these fields. RIKEN has been selected as the lead institution to develop Japan’s next national flagship supercomputer, the successor to Fugaku. In January 2025, we officially launched the FugakuNEXT project to develop and deploy this new system, aiming to achieve world-class performance in both simulation and AI. In this talk, we provide an overview of the FugakuNEXT project.

– Dr Masaaki Kondo, Team Leader, RIKEN Center for Computational Science

02:30PM – 02:50PMQuantum computers accelerating supercomputing workflows


Quantum computers (QC) can significantly enhance high-performance computing (HPC) as accelerators with unique capabilities for solving challenging chemistry, materials science, and optimization problems. Hybrid HPC+QC system offers unique advantages that neither classical nor quantum simulations can achieve independently. Our collaboration between IQM, a leading quantum hardware company, and the Leibniz Supercomputing Centre (LRZ), a premier HPC center, has demonstrated practical integration of quantum and classical resources. In this talk, we discuss how researchers from University College London, were able to conduct multiscale molecular simulation, where Quantum-Selected Configuration Interaction (QSCI) is employed to investigate proton transfer in interacting water molecules. We present the details of our technical implementation, including hardware and software requirements, networking and selection of the appropriate space to house the quantum computer. We also discuss how QC can be integrated with minimal disruption into HPC workflows and the benefits of on-prem QC.

Keywords: Hybrid HPC+QC computing, Quantum-Selected Configuration Interaction (QSCI), multiscale molecular simulation, proton transfer, high-performance computing, quantum computing, integration, scalability.

– Mr Hermanni Heimonen, Head of Product, IQM Quantum Computers

02:50PM – 03:10PMAccelerating the Future: AMD EPYC in HPC and AI

This presentation highlights how AMD EPYC processors transform HPC and AI Workloads with exceptional performacne, scalability, and energy efficiency. Explore real-world use cases and architectural innovations that enable faster results and reduced costs. From accelerating scientific research to optimizing AI Training and inference, AMD EPYC empowers organizations to achieve breakthrough outcomes while driving sustanable computing solutions.

– Mr Raghu Nambiar, CVP Software and Solutions, AMD

03:10PM – 03:30PMHitachi, Your Industrial Partner for AI Innovation

Hitachi is a company with over one hundred years of experience in industrial and operational technologies, with deep roots in consumer technologies, public transportation, energy generation, construction, and mining, in addition to many other areas. Its commitments to customer experience and excellence in outcomes, has led the company to leverage technology as a key competitive advantage. Today. that same platform for innovation and knowledge is being brought to customers across the world. Join us in this session as we explore the strategy on how Hitachi is helping customers of all sizes super charge their AI capabilities and meet their business aspirations.

– Mr Jason Hardy, VP & CTO, Client Strategy & Artificial Intelligence, Hitachi Vantara

03:30PM – 04:00PMTea Break
Tea Break is served at the Exhibition Room (Orchid Ballroom).

04:00PM – 04:20PMBuild and manage complex HPC, AI and data analytics environments, and run them anywhere

The convergence of artificial intelligence (AI), high performance computing (HPC) and data analytics is being driven by a proliferation of advanced computing workflows that combine different techniques to solve complex problems. AI and data analytics can augment traditional HPC workloads to speed scientific discovery and innovation. At the same time, data scientists and researchers are developing new processes for solving problems at massive scale that require HPC systems. While this convergence is accelerating discovery and innovation, it’s also putting pressure on IT to support ever more complex environments. The Omnia software stack helps speed and simplify the process of deploying and managing environments for mixed workloads.

– Mr Roshan Kumar, Director of AI Specialty Presales Asia Pacific, Japan and Greater China, Dell Technologies

04:20PM – 04:40PMAccelerate and Simplify AI Adoption with Pure Storage AI-Ready Infrastructure

Learn how to gain unmatched density, industry-leading reliability, and consistent performance at scale with a modern architecture purpose-built for flash technology. Enable seamless AI workloads across multi-GPU environments, transforming AI workflows, enhancing decision-making processes, and driving breakthroughs across organisations such as Kakao and a local government agency.

– Mr Lam Kuet Loong, Principal Technologist, ASEAN & Greater China, Pure Storage

04:40PM – 05:00PMMeeting Users Where They Are, & Empowering Them To Go Where They Want, How They Want

How Tenstorrent is using open standards, hardware, and software to enable the next generation of researchers

– Mr Felix Leclair, Field Application Engineer – HPC, Tenstorrent

05:00PM – 05:20PMThe Future Need for Liquid Cooled Data Centres

Discover how HPE and AMD are co-innovating to revolutionize AI and supercomputing with energy-efficient solutions and carbon neutral initiatives. Hear how power-efficient technologies, combined with advanced liquid-cooling systems drive significant energy savings and carbon footprint reductions. Learn from real-world deployments – the benefits and trade-offs, to consider when implementing high-performance, sustainable AI workloads.

– Prof Ben Bennett, Senior Director, HPC & AI Marketing, Hewlett Packard Enterprise

05:20PM – 05:40PMCatalyzing the Convergence of HPC and AI Workloads with a Heterogeneous Acceleration Platform

This session will explore how the scalability, flexibility, and high performance of heterogeneous converged infrastructure can accelerate HPC and AI workloads. Attendees will gain insights into how modern infrastructure—designed to integrate diverse heterogeneous resources—effectively addresses the unique demands of HPC and AI workloads while delivering enhanced processing power, efficiency, and innovation across applications driven by these workloads.

Key takeaways:

– The pivotal role of modern heterogeneous infrastructure in accelerating HPC and AI workloads.
– Proven strategies for optimizing scalability and flexibility to accelerate HPC and AI workloads.
– How converged heterogeneous infrastructure empowers innovative scenarios driven by HPC and AI workloads.

– Mr Dennis Juan, Associate Vice President, QCT Singapore

05:40PM – 06:00PMoneAPI and Intel Software Developer Tools

The speaker will highlight the benefits of using Intel oneAPI tools on IA plaform.Also will talk about Freedom, Productivity, and Performance for Accelerated Computing using oneAPI Toolkits

– Mr Ritesh Kulkarni, Software Sales Strategist Software Product Category (APJ,India), Intel

Artificial Intelligence
Exascale Computing

Location: Room P6 – Peony Jr 4511-2 (Level 4)

Abstract: The provision of weather forecasts via Numerical Weather Prediction (NWP) has traditional been a major application of large-scale supercomputers. Similarly, compute-intense model-based projections of future climate have contributed significantly in recent decades to our understanding and preparations for changes climate in coming decades. This session will review some of the key benefits, computational challenges and new science and technical solutions in the field of weather and climate science. On weather timescales in particular, a revolution is underway through the rapidly expanding use of AI-based approaches trained on the vast numbers of observations archived over recent decades. The use of AI on climate change timescales is also increasing, but frequently takes different forms due to the inherent lack of observations of future climate. This session will include perspectives from universities, the private sector and agencies on the role of HPC/AI in the addressing the challenge of dealing with extreme weather and climate change, both now and in the future.


Track Chair:
Prof Dale Barker, Director, Centre for Climate Research Singapore, Meteorological Service Singapore

[Invited Track]

Programme:

TimeSession
01:30pm – 03:30pmOpening

– Prof Dale Barker, Director (CCRS)

Computation and AI for High Performance Climate

In this talk, we will explore the role of computation and artificial intelligence (AI) in advancing high-performance climate modeling. We will delve into how computation can simulate and predict climate change, addressing the limitations of current climate simulations. With this problem in mind, we will discuss the onset of a new computational era, highlighted by the rapid growth of AI infrastructure investments. We realize how this development can help us to drive finer-grained climate models and discuss the challenges and computational demands required to achieve higher resolutions. We then dive into ML-based models with the example of diffusion-based models for data assimilation, showcasing opportunities and improvements over traditional methods. Additionally, we will show pitfalls with existing AI-based weather forecasting networks, using case studies such as Storm Ciaran and Typhoon Doksuri to underscore their limitations in predicting extreme events and capturing underlying physics. We will conclude with an overview of the Swiss AI climate initiative, a project aimed at developing a foundational model for climate and weather prediction by integrating more data, advanced models, and higher resolutions. Efforts to democratize access to climate data through compression techniques will also be highlighted.

– Prof Torsten Hoeffler, ETH, Switzerland

Developments in Generative AI for Climate and Weather

Generative AI (GenAI) has become extremely popular in recent times with the advent of LLMs like ChatGPT and image/video generation tools like Stable Diffusion, DALL-E, etc. GenAI methods have also shown great promise in the Climate and Weather domain. Early work with GANs has been augmented with newer generative methods, such as denoising diffusion models and score-based generative models. This talk discusses the features of these latest models and their potential applications. We will also share recent progress on use-cases of generative models for climate and weather modelling.

– Dr Jeff Adie, NVIDIA

Preparations for Next-Generation Weather and Climate Modelling at the Centre for Climate Research Singapore (CCRS)

The increasing diversity of large-scale HPC architectures available to run compute-intense numerical weather prediction (NWP) and climate projection simulations requires a fundamental rewrite of the traditional physical models that have underpinned weather and climate science for decades.

At the same time, the advent of AI/ML-based approaches to simulate physical and dynamical processes within the atmosphere provides an opportunity to replace aspects of the physical models (or even replace them completely) with more computationally efficient, and often more accurate, AI-based algorithms.

This talk will provide an overview of CCRS’ activities to move from the current generation of physical models and data assimilation algorithms based on the Unified Modelling (UM) System. Next-generation weather and climate modelling for the Singapore region has specific requirements, given the unique tropical urban nature of the weather and climate in the region. Despite this, significant progress has been made in recent years to add value to global predictions/projections through regional NWP And climate projections using km-scale models. A brief description of past efforts, plus plans to move the the next-generation SINGV_NG system will be presented.

– Prof Dale Barker, Professor, CCRS

Potential for Impact in Modelling and Simulation via AI Innovations in Weather Forecasting and Climate Change Impact Assessment

Recent transformative AI advances in short-term weather forecasting and long-term climate projections are an exciting development with major implications for downstream tasks. In this talk, we first illustrate how weather forecasting and climate simulations can enhance numerical models for critical downstream applications such as urban planning and environmental management. For example, accurate weather predictions can support operational decision-making in pollutant and chemical dispersion modeling as well as urban air quality management. Similarly, reliable long-term climate projections can inform sustainable and resilient urban design. The integration of AI with physics-based models enables actionable insights, thereby empowering decision-makers in the cities and industry of tomorrow.

– Dr Ooi Chin Chun, A*STAR

03:30pm – 04:00pmTea Break

04:00pm – 06:00pmExplainable Natural Language Processing for Corporate Sustainability Analysis

Sustainability commonly refers to entities, such as individuals, companies, and institutions, having a nondetrimental (or even positive) impact on the environment, society, and the economy. With sustainability becoming a synonym of acceptable and legitimate behaviour, it is being increasingly demanded and regulated. Several frameworks and standards have been proposed to measure the sustainability impact of corporations, including United Nations’ sustainable development goals and the recently introduced global sustainability reporting framework, amongst others. However, the concept of corporate sustainability is complex due to the diverse and intricate nature of firm operations (i.e. geography, size, business activities, interlinks with other stakeholders). As a result, corporate sustainability assessments are plagued by subjectivity both within data that reflect corporate sustainability efforts (i.e. corporate sustainability disclosures) and the analysts evaluating them. This subjectivity can be distilled into distinct challenges, such as incompleteness, ambiguity, unreliability and sophistication on the data dimension, as well as limited resources and potential bias on the analyst dimension. Put together, subjectivity hinders effective cost attribution to entities non-compliant with prevailing sustainability expectations, potentially rendering sustainability efforts and its associated regulations futile. To this end, we argue that Explainable Natural Language Processing (XNLP) can significantly enhance corporate sustainability analysis. Specifically, linguistic understanding algorithms (lexical, semantic, syntactic), integrated with XAI capabilities (interpretability, explainability, faithfulness), can bridge gaps in analyst resources and mitigate subjectivity problems within data.

– Prof Erik Cambria, NTU

Toward integration of ML/NWP/DA

Machine learning (ML)-based models for weather and climate have been evolving rapidly. Most of these ML models are trained on existing atmospheric reanalysis data produced by conventional numerical weather prediction (NWP) and data assimilation (DA) systems, which can also be further improved by ML. At RIKEN, we are investigating various applications of ML in the NWP-DA framework. We will present a precipitation nowcasting system by combining ML and NWP, a ML-based observation operator for satellite radiances, a combination of ML and DA to generate better training data for ML surrogate, and a regional NWP with a ML-based model.

– Dr Shigenori Otsuka, RIKEN

Digital Twins of the Earth’s Weather: To and from (X)AI

Artificial Intelligence (AI) is transforming weather and climate science, offering new ways to enhance predictions, improve model efficiency, and uncover hidden patterns in complex datasets. In the first part of this talk, I will first present our work that led us to AI, taking the route of physics and dynamical systems theory. I will then show some more recent work that brings us back from AI to physics and nonlinear dynamics, where the key is bridging human and machine knowledge.

– Asst Prof Gianmarco Mengaldo, NUS

[Topic TBC]

– JAMSTEC

Closing

– Prof Dale Barker, Director (CCRS)

Artificial Intelligence
HPC

Location: Room P5 – Peony Jr 4411-2 (Level 4)

Track Chair: Prof DK Panda

[Peer-Reviewed]

Programme:

TimeSession
01:30pm – 01:35pmVisualization, Storage, and Application

– Prof Yongqing Zhu, Associate Professor, Singapore University of Social Sciences

01:35pm – 02:15pmNonlinear Model Predictive Control Based on Nonlinear Autoregressive Exogenous Model for Energy Efficient Heat Removal Module in Data Centers

The Heat Removal Module (HRM) developed by KoolLogix Pte Ltd is a gravity-driven, refrigerant-based and passive heat removal cooling solution. we introduce a multiple inputs and multiple outputs nonlinear model predictive control (NMPC) system based on data-driven nonlinear autoregressive exogenous model (NARX) for HRM in data center. The NARX model is validated with the measurement data and the control performance is tested in-silico for different heat power loads. The results indicate that the developed NMPC controller can stably bring the cooling system reaching to a new temperature set-point just in 8.0 minutes.

– Dr Tran Si Bui Quang, Scientist, Institute of High Performance Computing, A*STAR

02:15pm – 02:55pmEmpowering Generative AI in Enterprises: Sustainable, High-Performance Storage Solutions with Global Data Fabric and Data Lakehouse

Rapid growth in unstructured data has triggered enterprises to adopt unified file/object storage solution, for managing both traditional and modern data workloads. By 2029, over 80% of unstructured data will reside on consolidated storage systems, up from 40% in 2024. As sustainability becomes important, businesses must use storage solutions with minimal environmental impact. This paper presents a storage solution for Generative AI and HPC, focusing on: – Critical role of high-performance storage with unified file/object access. – Essential sustainability practices. – Global data fabric/data lakehouse use case. It demonstrates how AI researchers, architects, and CIOs can leverage next-generation efficient and sustainable storage.

– Mrs Madhu Thorat & Mrs Sarah Walters, Software Architect / Research Systems Projects and Delivery Manager, IBM / The University of Queensland – Research Computing Center

02:55pm – 03:35pmTransforming Urban Wind Engineering by Taming Extreme Weather Strong Winds Over Urban Skylines with Ultra-High-Resolution Simulations on Supercomputer Fugaku

This study introduces a novel “Urban Fencing” concept to mitigate the impact of extreme winds, such as typhoons, over urban skylines. Using ultra-high-resolution 2-meter urban mesh simulations on the Fugaku supercomputer, we analyzed turbulence and energy cascades with a new framework using CUBE large-eddy simulation (LES) solver and ensemble empirical mode decomposition. This study demonstrates that urban fencing can help shield against strong winds. It does this by reducing intense turbulence caused by strong winds, lowering turbulent stress, and limiting turbulent transport, which can also help lessen the heavy rainfall in cities. These findings underscore urban fencing’s potential to reduce wind-related hazards and improve urban resilience.

– Dr Konduru Rakesh Teja, Postdoctoral Researcher, RIKEN Center for Computational Science, Kobe, Japan

03:35pm – 03:55pmBest Paper Award, Best Poster Award, Closing Remarks

– Best Paper Award (Prof Dhabaleswar K (DK) Panda)
– Best Poster award (Dr Atsuko Takefusa & Dr Jin Hongmei)
– Closing Remarks (Prof Dhabaleswar K (DK) Panda)

Location: Orchid Ballroom (Level 4)

Location: Melati Ballroom Foyer (Level 4) 

Registration begins from 08:00am.

Location: Poster Presentations / Delegate Lounge, Melati Jr Room 4010-4110 (Level 4) 

Winner for the SCA2025 Best Student Poster, will be announced at the SCA2025 Papers Breakout Track (Room – P5, Peony Jr) on Wednesday, 12 March 2025, at 03:35PM.

Location: Melati Ballroom (Level 4)

Abstract: Quantum computing represents a transformative leap in computational power, promising to solve complex problems that are intractable for classical systems. This talk explores the burgeoning field of quantum computing, with finance as an example. By harnessing superposition and entanglement, quantum computing can improve the speed and accuracy of financial modelling, enabling real-time risk assessment and more precise derivative pricing. The talk will also address key challenges in translating theoretical advancements into real-world solutions, providing a forward-looking perspective on quantum computing’s role in shaping the future of innovation.

Quantum Computing

Location: Melati Ballroom (Level 4)

Abstract: AI data centers are a different ball game – these up and coming facilities are being designed and constructed has been overhauled to accommodate high-density workloads. This represents unique challenges for operators and designers alike, ranging from temperature availability to consistent power availability.

This plenary session aims to shed light on important design and engineering imperatives the industry in general needs to consider in adapting to high-density compute workloads, and how will it impact existing and upcoming facilities.

Location: Room O7 – Orchid Jr 4311 (Level 4)

Abstract: The rapid evolution of ARM architecture has catalyzed significant advancements in high-performance computing (HPC) and artificial intelligence (AI), particularly in scientific research. This workshop aims to explore the potential of ARM-based systems as a transformative force in computational science. With the growing demand for efficient, scalable, and energy-conscious computing solutions, ARM’s unique architecture offers compelling advantages, including enhanced performance per watt and flexibility across diverse applications.

Participants will engage with leading experts in the field, discussing the integration of ARM-based platforms in various scientific domains, such as genomics, climate modeling, and particle physics.

We will showcase case studies demonstrating successful implementations of ARM systems in HPC environments, highlighting innovations in algorithm design and optimization techniques that leverage ARM’s capabilities. Additionally, the workshop will address the challenges and opportunities presented by the adoption of ARM architecture, including software compatibility, ecosystem development, and future directions for research. Interactive sessions will facilitate collaboration among attendees, fostering dialogue on best practices and strategies for maximizing the potential of ARM technology in scientific workflows.

By bridging the gap between hardware advancements and scientific applications, this workshop aspires to inspire new research collaborations and drive the next wave of innovation in computational science. We invite researchers, industry professionals, and students to join us in exploring the future of ARM-based systems and their impact on scientific discovery.

Together, we can harness the power of ARM architecture to propel scientific endeavors into new frontiers, ensuring that the next generation of research is both efficient and impactful.

Workshop URL: https://hpc.sjtu.edu.cn/abs4s

HPC

Location: Room P10 – Peony Jr 4512 (Level 4)

Abstract: The use of containers has revolutionized the way in which industries and enterprises have developed and deployed computational software and distributed systems. This containerization model has gained traction within the HPC community as well with the promise of improved reliability, reproducibility, portability, and levels of customization that were not previously possible on supercomputers. This adoption has been enabled by a number of HPC Container runtimes that have emerged including Singularity, and others.

This hands-on tutorial looks to train users on the use of containers for HPC use cases. We will provide a detailed background on Linux containers, along with an introductory hands-on experience building a container image, sharing the container and running it on a HPC cluster. Furthermore, the tutorial will provide more advanced information on how to run MPI-based and GPU-enabled HPC applications, how to optimize I/O intensive workflows, and how to setup GUI enabled interactive sessions. Cutting-edge examples will include machine learning and bioinformatics. Users will leave the tutorial with a solid foundational understanding of how to utilize containers on HPC resources using Singularity and Podman, and in-depth knowledge to deploy custom containers on their own resources.

Exascale Computing

Location: Room P11 – Peony Jr 4511 (Level 4)

Abstract: Learn to leverage cloud technologies for enhancing High Performance Computing (HPC) workflows through this comprehensive tutorial. Participants will explore how the cloud provides the scalability, agility, and flexibility needed for modern HPC applications, including weather modeling, computational fluid dynamics (CFD), financial services, and genomic analysis.

The session combines progressive lectures with hands-on labs using temporary AWS accounts, covering cloud foundations and best practices for HPC workloads. Participants are also provided with the opportunity to run scientific workloads using the Virtual Fugaku container, offering practical experience with cutting-edge cloud-based HPC resources. A guest speaker from RIKEN-CCS will provide insights into how Virtual Fugaku complements the Fugaku Supercomputer’s on-premises capabilities, demonstrating the powerful synergy between cloud and traditional HPC infrastructures.

This tutorial is ideal for scientists, engineers, and HPC specialists looking to expand their expertise in cloud-native technologies for HPC applications.

Important Notes: Attendees need to bring a WiFi capable laptop with a modern browser (Chrome, Firefox). Linux command line familiarity and basic scripting abilities, such as bash/zsh are required for the hands-on portions of this tutorial.

For any enquiries: Please email shutsui@amazon.com

Agenda:

TimeTopic
9:30am – 9:45amWelcome and Introduction
9:45am – 10:15amTalk: Cloud Fundamentals for HPC
10:15am – 10:30amLab 0: Access the workshop
10:30am – 10:45amMorning Break
10:45am – 12:00pmLab 1: Create an HPC cluster and run a simulation on AWS Parallel Computing Service (PCS)
12:00 – 12:15pmDemo: AWS Research Engineering Studio (RES) with PCS
12:15 – 1:30pmLunch Break
1:30pm – 2:00pmTalk: Hybrid HPC Concepts / What is Virtual Fugaku?
2:00pm – 3:00pmLab 2: Portable HPC using Apptainer and AWS ParallelCluster
3:00pm – 3:20pmTalk: Cloud-native HPC with AWS Batch
3:20pm – 3:45pmTea Break
3:45pm – 5:15pmLab 3: Run multiple HPC workloads with AWS Batch
5:15pm – 5:30pmSummary and Q&A

HPC

Location: Room P8 – Peony Jr 4412 (Level 4)

Abstract: In 2023, NSCC established the Alliance of Supercomputing Centres (ASC) the members of which include HPC Centres from 15 countries across the world. The main purpose of the alliance is to collaboratively enhance their capabilities in HPC and related technologies and competences.

In 2024, a Special Interest Group (SIG) was formed within ASC to focus on the challenges, ongoing efforts and expertise required to interface HPC and Quantum Computing (QC) for hybrid Quantum-HPC hardware and software infrastructure, use-cases and skills.

This ASC SIG Quantum-HPC Workshop proposed for SCA25 is the first public event of this SIG with a focus on ongoing efforts, resources and expertise of members of the ASC that are working on Quantum-HPC interfacing.

The objectives of this workshop are:

  1. To present and share technical and community building activities for Quantum-HPC interfacing.
  2. To identify mutual interests and opportunities for collaboration and co-development of Quantum-HPC interfacing where possible, both among the ASC members and with the wider HPC and QC communities.
  3. To synergise with other regional and international activities with similar objectives (e.g. IEEE Quantum-HPC Working Group, IEEE Standards Committee for Hybrid Quantum-Classical Computing, and European working groups linked to EuroHPC, European Quantum Flagship and European Quantum System Software)

The key takeaways from this workshop will be:

  • For ASC, concrete directions and topics for engagement among the members and with other regional and international activities.
  • For the wider workshop audience, awareness of activities within major HPC Centres for Quantum-HPC interfacing and opportunities for engagements / contributions to co-develop infrastructure and / or use-cases for hybrid Quantum-HPC systems.

Hosting this workshop at SCA25 also brings the focus from across the work to work closer with the HPC and QC community in APAC.

For any enquiries, please contact: venkatesh.kannan@ichec.ie

Workshop URL: https://www.ichec.ie/events/scasia2025-asc-sig-quantum-hpc

Programme:

TimeSession
09:30am – 09:35amWelcome by ASC

Welcome to participants
Introduction to ASC

– Mike Sullivan, NSCC

09:35am – 09:45amASC SIG Quantum-HPC

Introduction to ASC SIG Quantum-HPC.
Survey background and expectations from audience

– Venkatesh Kannan, ICHEC

09:45am – 10:00amEfforts in Singapore

NQCH roadmap in integrating QC with HPC: Perspectives on Interfaces, System Software, Middleware and Applications

– Ye Jun, ASTAR IHC & Q.InC*

10:00am – 10:15amEfforts in Japan

JHPC quantum project to design and build a quantum-supercomputer hybrid computing platform

– Mitsuhisa Sato, RIKEN

10:15am – 10:30amEfforts in Luxembourg

Luxembourg’s Quantum Awareness and Ecosystem Initiative

– Alban Rousset, LuxProvide

10:30am – 11:00amTea Break

11:00am – 11:15amEfforts in Australia

Pawsey’s Quantum Supercomputing Innovation Hub: Bridging HPC and Quantum Computing

– Pascal Jahan Elahi, Pawsey Supercomputing Research Centre

11:15am – 11:30amEfforts in South Korea

Software Development for a Full-Stack Quantum Computer and Hybrid computing by KISTI

– Junghee Ryu, KISTI

11:30am – 11:45amEfforts in Ireland

Quantum Programming Ireland Initiative, EuroHPC HPCQS, EuroHPC EuroQCS-France

– Venkatesh Kannan, ICHEC

11:45am – 12:00pmRelevant initiatives

IEEE Quantum-HPC WG, European Quantum System Software Summit

– Venkatesh Kannan, ICHEC

12:00pm – 12:30pmPanel & open discussion

Q&A with open floor discussion with presenters about technical challenges, priority focus areas, opportunities for collaboration

– Venkatesh Kannan, ICHEC

Quantum Computing

Location: Room P9 – Peony Jr 4411 (Level 4)

Abstract: The convergence of AI and High-Performance Computing (HPC) is revolutionizing deep learning workflows, enabling scalable model training, fine-tuning, and inference. As AI workloads grow in complexity, HPC systems—originally designed for scientific simulations—are evolving with cutting-edge hardware like GPUs and high-speed interconnects, making them ideal for AI-specific tasks. This transformation is fostering new practices in HPC, such as containerized environments, distributed training, and optimized resource management. This tutorial will provide a comprehensive overview of best practices for utilizing HPC platforms for AI development. Key topics include distributed training with PyTorch’s Distributed Data Parallel (DDP), horizontal and vertical scaling approaches, and container technologies like Enroot for reproducibility. Attendees will gain hands-on experience in setting up HPC environments, managing workloads, and scaling models, equipping them to leverage HPC infrastructure for high-performance AI model training.

Workshop URL:
GitHub: https://github.com/snsharma1311/SCA-2025-DistributedTraining (Will be up by 15th Feb, 2025)

Important Notes/ Prequisites:

  1. Participants are required to bring their own laptops for hands-on. Please install ssh client for remotely accessing the HPC system.
  2. Participants may find sample codes and references from out GitHub repository (https://github.com/snsharma1311/SCA-2025-DistributedTraining) by 15-02-2025. Additionally, we’ll be updating information like VPN setup and other pre-requisites etc. in the repository as well.
  3. The tutorial is designed for intermediate-level users who are familiar with HPC systems and have experience in AI development. Ideal participants should have basic understanding of:
    • HPC and AI software stacks
    • Python programming
    • Linux environments
    • PyTorch for deep learning

POC Details: For enquiries, please contact: shashank.sharma@cdac.in

Agenda:

Introduction to AI-focused HPC Setups (30 minutes)
Presenter: Mr. Shashank Sharma/ Mr. Anandhu Nair

Theory & Hands-On

  • HPC hardware configurations for AI workloads: CPUs, GPUs, and networking.
  • Software tools: Environment setup, libraries, virtual environments.
  • Job management with SLURM and container technologies.

Distributed Training Concepts (30 minutes)
Presenter: Mr. Shashank Sharma

Theory

  • Scaling strategies: Vertical vs. Horizontal scaling.
  • Distributed training theory: data, model and hybrid parallelism.

Distributed Training with PyTorch (1 hour)
Presenter: Mr. Kishor Y D

Hands-On

  • Multi-GPU training using PyTorch DDP.
  • Code walkthrough and practical exercises with SLURM.

Containerized Training with Enroot & DeepSpeed Demonstration (1 hour)
Presenter: Ms. Sowmya Shree

Hands-On & Demo

  • Containerizing deep learning workflows.
  • Best practices for Enroot in multi-node environments.
  • DeepSpeed Demonstration – How to train with less resources

HPC

Location: Room O6 – Orchid Jr 4312 (Level 4)

Abstract: Variational quantum algorithms belong to the class of hybrid classical-quantum computation, leveraging both classical as well as quantum compute resources. These algorithms are widely believed to be promising candidates for first demonstration of useful application of quantum computation in areas such as quantum chemistry, condensed matter simulations, and discrete optimization tasks. Variational algorithms use a parametrized quantum circuit ansatz to estimate the lowest-energy eigenvalue of a Hamiltonian encoding an underlying problem-specific objective function and progress by iterative execution of the parametrized circuit, passing back the result of each computation to a classical optimizer which updates the circuit parameters until a convergence or stopping criteria is satisfied. In real-world quantum hardware, the noise of the device effectively limits the number of circuit operations which can be faithfully executed and degrades the quality of the results of the computation, which affects the convergence of the optimization performed on the classical computer.

In this tutorial, we walk the participants through the practical steps in the execution of a variational quantum algorithm for realistic noisy intermediate-scale quantum (NISQ) hardware. We first introduce an example variational algorithm and provide a reference implementation for a specific problem. We discuss in detail how we can conveniently access quantum computing resources through Amazon Braket and efficiently execute the algorithm on AWS.

For any enquiries, please contact: sterseba@amazon.com

Important Notes to Participants / Pre-requisites:

Participants will be given temporary access to an AWS-hosted development environment to follow the instructor-led demos on their laptop. No software needs to be installed prior or during the tutorial. A web browser and a standard wireless Internet connection are sufficient to participate.

Programme:

Time Session
09:30am – 10:15am Introducing the AWS ecosystem to access quantum computing resources and to execute variational quantum algorithms
10:15am – 10:45am Navigating the AWS management console and development environment for the tutorial
10:45am – 11:00am Tea Break
11:00am – 12:30pm Diving into workflows for the execution of variational quantum algorithms with CUDA-Q on AWS
Quantum Computing

Location: Melati Ballroom (Level 4)

Abstract: You will understand overall Supermicro product portfolio

Artificial Intelligence

Location: Orchid Ballroom (Level 4)

Location: Melati Ballroom (Level 4)

Abstract: Which problems allow for a quantum speedup, and which do not? This question lies at the heart of quantum information processing. Providing a definitive answer is challenging, as it connects deeply to unresolved questions in complexity theory. To make progress, complexity theory relies on conjectures such as P≠NP and the Strong Exponential Time Hypothesis, which suggest that for many computational problems, we have discovered algorithms that are asymptotically close to optimal in the worst case.

While these hypotheses are invaluable for algorithmic thinking and design, they don’t capture the whole picture. In practice, we are often interested in solving specific problem instances rather than finding universally efficient solutions. To bridge this gap, we need new tools and ideas.

In this talk, I will explore the landscape from both theoretical and practical perspectives. On the theoretical side, I will introduce the concept of “queasy instances”—problem instances that are quantum-easy but classically hard (classically queasy). On the practical side, I will discuss how these insights connect to advancements in quantum hardware development and co-design.

Quantum Computing

Location: Melati Ballroom (Level 4)

Abstract: High Performance Computing (HPC) has been a driving force behind scientific breakthroughs, cutting-edge research, and business innovation for decades. In recent years, the HPC landscape has undergone a transformative shift towards a more hybrid model, embracing diversity in node types (CPU, GPU), striking a balance between on-premises and cloud resources, and leveraging the power of artificial intelligence. As we look to the future, quantum computing is poised to further revolutionize the HPC realm. As a leader in cloud-based HPC and quantum computing solutions, AWS has been at the forefront of this transformation, investing heavily in making the transition seamless for its customers. In this presentation, we will explore the emerging trends shaping the HPC industry and AWS’s vision for empowering customers to harness the full potential of these cutting-edge technologies.

Artificial Intelligence
HPC

Location: Melati Ballroom (Level 4)

Abstract: As the world of artificial intelligence evolves at an unprecedented pace, join WEKA to decode the transformative shifts shaping the AI landscape. Our session will explore three pivotal dimensions:

  • AI Industry Trends: Delve into the rapid advancement of foundational models, from GPT to the rise of reasoning-enhanced architectures like LLM 2.0. Discover how emerging frameworks, post-transformer innovations, and efficiency breakthroughs redefine intelligence across industries.
  • AI Infrastructure Innovations: Navigate the scaling walls of AI infrastructure. Learn about advancements in GPU architectures, memory disaggregation, and latency-optimized systems like KV caching, which are driving exponential token throughput and enhanced inference efficiency.
  • The AI Token Economy and Business Models: Learn how AI’s expanding capabilities catalyze new economic paradigms. From synthetic data to token economics, explore how enterprises unlock value through scalable training, inference optimization, and cost-effective user engagement models.

Artificial Intelligence

Location: Melati Ballroom (Level 4)

Abstract: Details for this session are currently being finalized. Please stay tuned as we update the programme with exciting content over the coming days!

Location: Bayview Foyer (Level 4)

Location: Room P8 – Peony Jr 4412 (Level 4)

Abstract: This half-day tutorial introduces the concepts and building blocks behind campaign management. Campaign Management enables a group of scientists to manage many related datasets stored in multiple files, across multiple facilities as if it was in a single file / database.

We provide an organization concept for collecting metadata from all datasets in small files called Campaign Archives, which can be shared among project participants on their laptops, and which can easily facilitate discovery of content and pointers to the data location as well as remote access to the data by local tools as if data was local.

Remote data access to large HPC datasets is a daunting challenge, and downloading entire files is prohibitive, therefore, we discuss enabling technologies to download only the data values, to a user-defined accuracy, which are of interest. This includes:

a) Self-describing file formats (like ADIOS, HDF5) that enables collecting metadata to present the content in the campaign archive as well as facilitate fine-grained access to partial selections of single variables in a remote dataset;

b) Derived Variables, mathematical expressions to compute the derived variables, that contain only enough information for

c) Queries on Derived Variables that quickly filter the list of data blocks that need to be retrieved for satisfying the user’s interest; and

d) Data Reduction techniques that can save on network bandwidth and guarantee user-defined error bounds.

Our team is developing enabling technologies for campaign management, and initially integrating it with ADIOS which provides scalable file I/O and data streaming, and MGARD, which is an error-controlled compression and refactoring framework for scientific data. We will present applications from fusion and combustion, simulation and experimental data, to showcase the utility of campaign management.

We invite users and developers of large scale experimental and codes to learn how they can work with their large scale data on resources ranging from their laptops to exascale computers.

We also invite all interested researchers and developers to join our effort to enable this technology across a wide range of data technologies to create a comprehensive solution for bringing data to the computation.

HPC

Location: Room P9 – Peony Jr 4411 (Level 4)

Abstract: Recent advances in Deep Learning (DL) have led to many exciting challenges and opportunities. Modern DL frame-works enable high-performance training, inference, and deployment for various types of Deep Neural Networks (DNNs). This tutorial provides an overview of recent trends in DL and the role of cutting-edge hardware architectures and interconnects in moving the field forward. We present an overview of different DNN architectures, DL frameworks, and DL Training and Inference with special focus on parallelization strategies. We highlight challenges and opportunities for communication runtimes to exploit high-performance CPU / GPU architectures to efficiently support large-scale distributed training.

We also highlight some of our co-design efforts to utilize MPI for large-scale DNN training on cutting-edge CPU and GPU architectures available on modern HPC clusters. Throughout the tutorial, we include several hands-on exercises to enable attendees to gain first-hand experience of running distributed DL training and inference on a modern GPU cluster.

For any enquiries, please contact panda@cse.ohio-state.edu.

Workshop URL: https://nowlab.cse.ohio-state.edu/tutorials/scasia25-hidl/

Programme:

  • Introduction
    • The Past, Present, and Future of Artificial Intelligence (AI)
    • Brief History and Current/Future Trends of Machine Learning (ML) and Deep Learning (DL)
    • What are Deep Neural Networks?
  • Deep Learning Frameworks
  • Deep Neural Network Training
  • Distributed Data-Parallel Training
    • Basic Principles and Parallelization Strategies
    • Hands-on Exercises (Data Parallelism) using PyTorch and TensorFlow
  • Latest Trends in High-Performance Computing Architectures
    • HPC Hardware
    • Communication Middleware
  • Advanced Distributed Training
    • State-of-the-art approaches using CPUs and GPUs
    • Hands-on Exercises (Advanced Parallelism) using DeepSpeed
  • Distributed Inference Solutions
    • Overview of DL Inference
    • Case studies
  • Open Issues and Challenges
  • Conclusions and Final Q&A
Artificial Intelligence

Location: Room M2 – Melati Jr 4011 (Level 4)

Abstract: The rapid advancements in quantum computing pose a significant challenge to existing cryptographic systems, necessitating a proactive shift towards quantum-safe security measures. The “Quantum Cybersecurity” track will explore the evolving quantum threat landscape, the release of NIST’s Post-Quantum Cryptography (PQC) standards, and the critical need for crypto agility in adapting to these changes. Experts will share insights into the latest developments in quantum computing technology, its growing accessibility, and the urgency of quantum readiness. The track will feature a talk on Quantum Key Distribution (QKD) and Singapore’s national efforts under the National Quantum-Safe Network (NQSN), reinforcing the importance of secure communications in a post-quantum era.


Track Chair: Mr Jon Lau, Director, CISO Office & Scientific IT, A*STAR

[Invited Track]

Programme:

TimeSession
01:30pm – 02:00pmRegistration

02:00pm – 02:15pmOpening Remarks

– Mr Jon Lau, Vice Chair, Quantum Cybersecurity Workshop, SGTech; Director, Cybersecurity, A*STAR

02:15pm – 02:50pmSession 1: Navigating the New Cybersecurity Landscape with Crypto Agility

As quantum computing advances, organizations must urgently prepare for its potential impact on cybersecurity. With emerging quantum risks, such as “Harvest Now, Decrypt Later” attacks, and the vulnerabilities posed to traditional cryptographic systems, the need for post-quantum cryptography (PQC) adoption has never been more critical – especially following NIST’s certification of new PQC algorithms in August 2024.

Areas that will be covered include an exploration of quantum-related threats, the evolving regulatory landscape surrounding PQC adoption, and the challenges organizations face in securing Public Key Infrastructure (PKI), Internet of Things (IoT), Transport Layer Security (TLS), and Code Signing. Real-world case studies will be shared to demonstrate PQC implementation strategies and the importance of ecosystem collaboration and testing in ensuring enterprise security.

Together, let’s build a quantum safe future by becoming cryptographically agile.

– Mr Shaun Chen, AVP, APAC Sales Engineering, Thales

02:50pm – 03:25pmSession 2: Preparing for the Quantum Leap

Quantum computing offers transformative potential by enabling faster, more accurate, and innovative solutions across a wide variety of industries. Quantum computing also creates a significantly threat challenging the security of many current encryption methods we rely upon for digital trust. In this session we will review the state of the quantum computing landscape and the progress being made to deliver value to the marketplace. We will also explore the requirements for transitioning to a quantum safe future looking across technological, business and regulatory factors crucial for a successful migration program. The quantum future is approaching rapidly, and the time to prepare is now.

– Mr John Buselli, Offering Manager, Research, IBM

03:25pm – 04:00pmTea Break

04:00pm – 04:35pmSession 3: Quantum Readiness

As quantum computing nears a critical inflection point, traditional cybersecurity measures are increasingly at risk. In this talk, we will investigate how quantum technology is reshaping data security and explore strategies organizations can adopt to stay ahead in the post-quantum era. Drawing on practical examples and actionable recommendations, we will address the key steps required to transition to quantum-safe solutions and the hurdles organizations may face along the way.

Designed with both depth and accessibility in mind, this session welcomes a broad audience—from technical specialists to senior decision-makers—seeking to strengthen their security posture in the face of emerging quantum threats.

– Mr Alexey Bocharnikov, Director, Accenture

04:35pm – 05:10pm
Session 4: Quantum Key Distribution and Advancements

As quantum computing threatens traditional encryption, Quantum Key Distribution (QKD) emerges as a key solution to future-proof communications networks. This talk explores QKD’s security advantages, recent advancements in deployments, technology, and certifications, and its growing adoption worldwide.  With fiber-based QKD facing distance limitations, satellite-based QKD is the next step in enabling global quantum-safe communications. We’ll highlight key projects, challenges, and breakthroughs shaping a secure, quantum-resilient future.

– Dr Robert Bedington, Co-founder & CTO, SpeQtral

05:10pm
End of Session

Quantum Computing

Location: Room L1 – Lotus Jr (Level 4)

Track Chair: Mr Tommy Ng, NSCC

[Invited Track]

TimeSession
01:30PM – 01:50PMHPC for the Age of AI

Democratization of HPC – Supercomputing has traditionally been an extremely time-consuming and resource-intensive undertaking. Access to GPU resources, unified tools, and AI expertise can often be difficult to attain and optimize.

Many enterprises struggle with the complexity, effort, and time required to operate such infrastructure.

The future trends will address such issues.

– Mr SS Lim, CEO, PTC System

01:50PM – 02:10PMLeveraging Agentic AI for Advancing AI in Science

To harness Agentic AI for “AI for Science,” it is essential to leverage numerous AI expert models, enable high-speed inference, and integrate existing tools beyond AI models to construct logical processes. In this session, I will demonstrate how SambaNova can simplify the creation of Agentic AI workflows, featuring live demonstrations to showcase its capabilities.

– Dr Nan Hu, Senior Principal Solutions Engineer, SambaNova Systems

02:10PM – 02:30PMAMD Instinct GPU for HPC/AI

The presentation is to share the AMD GPU from Architecture and its development journey from CDNA1 to DCNA4 architecture. As to support AMD MI-Instinct GPU to be applicable for computing, AMD has been building a Software Development Environment for user since 2015. The software named ROCm, which the recently released was version 6.3 which provide additional features in supporting LLM, AI frame works and optimized Libraries which deliver more performance on the same work load as compared with earlier version 6.0. Frictionless migration path from current GPU environment to ROCm environment will be shared.

– Mr Robert Sheen, Sr. Manager DCGPU Business Development, AMD

02:30PM – 02:50PM Taking Control of Your AI Powerhouse

AI has incredible potential to change the way you engage with customers and drive value in your business, so now is the time to scale! However, as many customers have found, rapid scale can also lead to rapid problems, with escalating costs, data collection challenges and regulatory issues. Hitachi Vantara, a leader in the industrial AI space, has brought to market a solution portfolio that not only solves the technology hurdles, but addresses the afore mentioned challenges, and through that enabling you to accelerate to scalable value faster.

– Mr Matthew Hardman, CTO, Asia Pacific, Hitachi Vantara

02:50PM – 03:10PMArchitecting the intersection of Quantum Computing and Artificial Intelligence

This session explores innovated Dell architectures for enabling the convergence of quantum simulation, artificial intelligence, and traditional computation sciences. We discuss hardware architectures, and software strategies for seamless integration, and the offloading of workflows to hybrid-quantum platforms.

– Mr Andrew Underwood, APJC Field CTO, Dell Technologies

03:10PM – 03:30PMAI at Every Scale: Vertiv’s Approach to HPC AI Deployment Scenarios

Delve into the diverse scenarios and strategies of HPC/AI deployments, exploring how Vertiv’s innovative infrastructure solutions can be effectively implemented across various scenarios. From small-scale, edge scenarios to hyperscaler environments, this industry session discusses how Vertiv’s end-to-end portfolio — including advanced cooling systems and power distribution – addresses the unique challenges of each deployment.

Through real-world examples and practical insights, discover how Vertiv empowers organizations to achieve efficiency, scalability, and reliability in HPC/AI operations. Let our experts equip you in choosing and implementing the right solution for every stage of your HPC/AI journey.

– Mr Alvin Cheang, HPC / AI Business Director, Asia, Vertiv

03:30PM – 03:40PMTea Break
Tea Break is served at the Exhibition Room (Orchid Ballroom).

03:40PM – 04:00PMDeploying 10,000+ GPU Clusters with VAST Data: Large-scale Model Training and Inference

As AI workloads grow in complexity, deploying and managing large-scale GPU clusters efficiently is critical for HPC/AI environments. This talk explores how VAST Data enables seamless scaling to 10,000+ GPU clusters, optimizing model training and inference with unparalleled performance and stability. VAST is widely deployed in the largest GPU environments in the world, scaling to over 100,000 GPUs; insights from this experience will also be shared.

We’ll discuss key architectural considerations, real-world deployment insights, and best practices for maintaining resilience at scale — ensuring HPC teams can push the boundaries of AI innovation without compromising efficiency or reliability.

– Dr Subramanian Kartik, Vice President of Systems Engineering, VAST Data

04:00PM – 04:20PMStandards-based Data Platforms for HPC and AI

HPC and AI workloads demand high-performance access, often to multiple data sources in different storage systems and cloud instances. Traditional storage architectures struggle with data silos, proprietary file systems, and complex orchestration. Organizations need a unified global data platform that seamlessly spans edge, data centers, and cloud environments—without vendor lock-in.

This session explores how pNFS and automated data orchestration enable an open, standards-based approach to parallel file system performance for AI and HPC workloads. With recent Linux kernel advancements—including contributions from Hammerspace—organizations can now achieve linear scalability in IOPS and throughput using standard NFS clients.

Key topics include:
• Parallel Global File Systems with NFS 4.2 – How open standards now provide high-performance, scalable access to distributed data without proprietary clients.
• Automated Data Orchestration – How Hammerspace’s Flexible Files technology enables seamless, real-time data movement across storage types while ensuring continuous access.
• Optimized Workflows for AI & HPC – How a unified namespace accelerates data-driven workloads and ensures compute power is fully utilized.

– Mr Floyd Christofferson, Vice President of Product Marketing, Hammerspace

04:20PM – 04:40PMQuantum and Supercomputing Threats: Check Point’s Strategy for Future-Proof Security

The rapid advancements in quantum computing pose both opportunities and threats to cybersecurity. Classical encryption methods, including RSA and ECC, are at risk of being rendered obsolete by quantum algorithms like Shor’s algorithm. Check Point Quantum Cryptography presents a forward-thinking approach to securing digital communications against these emerging threats. By leveraging post-quantum cryptographic algorithms, Check Point aims to ensure data integrity and confidentiality in a quantum-powered future. This presentation will explore the fundamentals of quantum cryptography, discuss the challenges of implementation, and outline Check Point’s roadmap for securing enterprises against quantum threats. The audience will gain insight into how organizations can future-proof their security infrastructures and adopt a quantum-resistant cybersecurity posture.

– Mr Abhishek Kumar Singh, Sales Engineer Manager, APAC, CheckPoint Software Technologies

04:40PM – 05:00PMAccelerating Scientific Discovery with Azure Quantum

Azure Quantum is revolutionizing scientific discovery by integrating quantum computing, artificial intelligence, and high-performance computing. This presentation highlights the potential of quantum computing to address complex global challenges, particularly in chemical, materials, and drug discovery. It showcases the Azure Quantum platform’s capabilities, including the new Copilot experience, which enhances productivity for computational chemists. The presentation also emphasizes the importance of reliable logical qubits for practical quantum advantage and Microsoft’s unique approach to achieving quantum scale. Join us on this journey to unlock breakthrough value and accelerate innovation cycles with Azure Quantum.

– Mr Ujjwal Kumar, Principal Architect, Office of CTO, Microsoft Asia, Microsoft

05:00PM – 05:20PMXeon Processor Designed for HPC – Intel Xeon 6900 Series
Discover what makes Intel Xeon 6900 series processors exceptional for use in high performance computing. This includes thermal design, memory performance and Peak efficiency.

– Mr Otto Chow, Technical Sales, Sales & Marketing (APAC), Intel

05:20PM – 05:40PMQuantum-centric Supercomputing: A New Perspective on Computing
As quantum systems have recently become more capable and mature, the integration of quantum systems in HPC datacenters has become more and more frequent. In this talk I will present some of our efforts along our vision of integrating quantum computing in HPC environments and will show how users can already start extracting value from quantum computers before the maturity of quantum error correction. I will also show how classical supercomputing can have a critical role at different stages of a quantum computation and how classical developers can already actively engage with heterogeneous workflows in integrated quantum and classical systems.

– Dr Antonio Córcoles, Head of Quantum + HPC, Principal Research Scientist, IBM

05:40PM – 06:00PMAccelerating Scientific discovery with HPC & AI

AI and HPC are transforming scientific discovery by speeding up research processes. These technologies are applied in fields like chemistry and materials science, enabling rapid screening and accurate predictions. The integration of AI and HPC has significantly reduced discovery times, allowing scientists to achieve breakthroughs in weeks instead of years, addressing urgent challenges in various domains.

– Mr Nikkesh Siva, Sr Specialist, Azure HPC, Microsoft

Artificial Intelligence

 Location: Room M3 – Melati Jr 4111 (Level 4)

Abstract: The introduction of the first Diversity and Inclusion (D&I) track at Supercomputing Asia (SCAsia) in 2022 marked a pivotal step towards recognising the essential role of diversity, equity, and inclusion within the high-performance computing (HPC) community and beyond. This inaugural track highlighted the “Why”—why diversity, equity, and inclusion matter, illustrating the benefits they provide to organisations, research endeavours, and individuals by fostering innovation, creativity, and improved outcomes.

In subsequent years, the focus shifted to the “How”—the strategies, policies, and actionable steps necessary to cultivate a truly diverse and inclusive environment, one that not only embraces difference but also leverages it to drive progress and equity.

At SCAsia 2025, we invite you to explore the “What”—what diversity, equity, and inclusion achieve in practical, measurable terms for researchers, organisations, and the broader HPC community. This track will delve into the tangible dividends emerging from diverse and inclusive environments, supported by data and real-world examples demonstrating how diverse teams outperform their counterparts in creativity, problem-solving, and productivity.

Join us to learn how a commitment to diversity and inclusion not only creates a more equitable and supportive workplace but also delivers exponential returns on investment—fostering groundbreaking research, driving innovation, and achieving sustained success in HPC and beyond.


Track Co-Chair:

  • Ms Aditi Subramanya
  • Ms Jana Makar
  • Dr Emily Barker

[Invited Track]

Programme:

TimeSession
01:30pm – 02:00pmOpening, Building Competitive Advantage in HPC

At SCAsia 2025, we invite you to explore the “What”—what diversity, equity, and inclusion achieve in practical, measurable terms for researchers, organizations, and the broader HPC community. This track will delve into the tangible successes emerging from diverse and inclusive environments, supported by data and real-world examples demonstrating how diverse teams outperform their counterparts in creativity, problem-solving, and productivity.

Join us to learn how a commitment to diversity and inclusion not only creates a more equitable and supportive workplace but also delivers exponential returns on investment—fostering groundbreaking research, driving innovation, and achieving sustained success in HPC and beyond.

– Ms Aditi Subramanya, Partner Engagement Manager, Pawsey Supercomputing Research Centre

02:00pm – 02:30pmAdvancing DEI in Research Software through the Research Software Engineering (RSEng) Asia Association

Research software, including essential code, algorithms, and workflows developed in research, is integral to scientific discovery. The Research Software Engineering (RSEng) Asia Association, founded in 2021, promotes formal RSEng roles across Asia and supports a community of professionals in research software and adjacent HPC fields. While RSEs and the HPC community members are distinct, they often collaborate on advancing computing-intensive research. Through global partnerships, annual RSE Asia Australia unconferences, and DEI-focused initiatives, the association connects participants from across Asia and globally, empowers the RSE and HPC communities with diverse insights, fosters innovative problem-solving, and makes inclusive contributions to science.

This talk will share insights into the RSE Asia Association’s journey, highlighting how DEI-focused initiatives are bridging gaps, fostering collaborations, and creating a more inclusive environment that drives progress in research software engineering across Asia.

– Ms Jyoti Bhogal, Co-founder and Lead, Research Software Engineering (RSE) Asia Association

02:30pm – 03:00pmDiverse Pathways to a Career in Technology

The road into tech is rarely linear, and my own journey is proof of the diverse opportunities available. From working with researchers and startups to optimising technology for efficiency, and eventually moving into government and IT to make science work for us, my career has been shaped by curiosity and a drive to enable others.

My entry into high-performance computing (HPC) was a natural evolution of this path—first through geoscience, before realising the potential to support broader research communities. Along the way, I’ve had the privilege of working with incredible leaders, who have carved their own unconventional path in tech, proving that there’s no single formula for success.

This talk will explore real-life examples of women who have forged unique careers in technology, highlighting the different ways we can contribute, innovate, and lead. By sharing these experiences, I hope to inspire others to embrace diverse pathways and seize opportunities that align with their passions.

– Dr Carina Kemp, Principal Business Development Manager, Research, Amazon Web Services

03:00pm – 03:30pmDeveloping a Comprehensive Framework for Detecting Social Diversity in Large Language Models: A Computational Social Science Approach

LLMs are increasingly important in HPC applications, but concerns persist regarding their biases in shaping societal views on gender, aging, and minority groups. These biases, often arising from training data and processes, are typically handled by engineers through filtering or blocking mechanisms. However, research suggests that excessive filtering may diminish social diversity and inclusivity, reducing the breadth of opinions in LLM outputs. This study proposes a framework for enabling social scientists to assess how well LLMs reflect diversity and inclusivity. The framework includes confirming the distribution of key metrics from traditional social diversity surveys, generating virtual respondents that mirror real-world demographic and personality traits, and verifying response variability across different LLM personas. Statistical methods will then be applied to compare the output distributions of LLMs with actual survey data. To validate this approach, surveys from Taiwan and other Asian countries will be used as case studies. This framework is for evaluating whether LLM-generated content reflects real-world cultural diversity and for developing practical tools that foster interdisciplinary collaboration, allowing social scientists to assess and test LLM diversity more effectively.

– Dr. Chia-lee Yang, Principle Engineer, National Center for High-Performance Computing, Taiwan
– Dr. Yen-Jen Lin, Researcher, National Center for High-Performance Computing, Taiwan

03:30pm – 04:00pmTea Break

04:00pm – 04:30pmDiversity in Numbers: The Significance of Partnerships in Driving Sustainable EDI Programmes

Meaningful EDI changes occur only with sustained effort, but organisations often face roadblocks such as lack of resources and subject expertise. This is why holistic programming is at the heart of AMD Singapore’s EDI strategy. The company’s EDI roadmap is not only anchored by internal initiatives, but also through consistent programs with organisations such as Daughters of Tomorrow and United Women Singapore, as well as expansive partnerships with industry partners such as SWE@SG and SSIA. The keynote will delve into the company’s learnings on the significance of industry partnerships and insights into how different organizations can approach their own efforts.

– Ms Pei Fern Ng, Senior Manager of Silicon Design Engineering, AMD

04:30pm – 06:00pmWHPC+ Australasia: From Inclusion to Impact

The WHPC+ Australasia chapter was founded in 2019 and since then we have built a great community. In our next phase the chapter is focussing on how to support our partners to build diverse teams which will allow them to drive growth within their fields. The chapter committee will facilitate a discussion to investiage new programs to help our partners in their diversity and inclusion endeavours.

– Dr Emily Barker, Senior HPC Engineer, The University of Western Australia

06:00pmEvent Close

Location: Room O4 – Orchid Jr 4212 (Level 4)

Abstract: The GRP (Global Research Platform) was established to create a  worldwide Software Defined, Globally Distributed, Multi-Domain Computational Science Environment  environment for data-intensive research including remote access to HPC facilities, storage and computing and AI development resources, including GPU farms and  in particular advanced networking capabilities for large-scale data transfers 

This session will cover the recent advances in the deployment of the GRP, the Asia Pacific Research Platform (APRP) as well as the US and Korean efforts under the NRP and KRP initiatives respectively.

Advances in international communications with the advent of 400Gbps and soon beyond will be covered as well as the announcement of the 2025 Data Mover Challenge (DMC25) which will again allow to judge the progress in data transmission speeds and throughput both on subsea and satellite.

Track Co-Chair:

  • Assoc Prof Francis Lee Bu Sung, President, SingAREN
  • Prof Joe Mambretti, Director, International Center for Advanced Internet Research, Northwestern University, StarLight International/National Communications Exchange Facility
  • Dr Jeonghoon Moon, Principal Researcher, Korea Institute of Science and Technology Information (KISTI)

[Invited Track]

Programme:

TimeSession
01:30pm – 01:50pmOpening Address

– Prof Joe Mambretti, Director, International Center for Advanced Internet Research, Northwestern University, StarLight International/National Communications Exchange Facility
– A/Prof Francis Lee Bu Sung, President, SingAREN

01:50pm – 02:10pmSingAREN Open Exchange and GRP

SingAREN Open Exchange (SOE) has become an important POP for both Singapore researchers and international partners. In 2021, SOE became a distributed POP providing more connection options and resilience. Going beyond connectivity, SOE has also provided services such as DB mirror, commonly used db in EU and USA are periodically copied to a server in Singapore and shared. Thus, helping researchers in the region to get data more efficiently and improving their research productivity. SOE is also supported major big science project such as LHCONE project. This talk will share some of the projects carried out at SOE as well as future plans to support researchers.

– A/Prof Francis Lee Bu Sung, President, SingAREN

02:10pm – 02:30pmFrom Spectrum to Quantum: ESnet’s contributions to building the next-generation research platform

As scientific research continues to evolve, the demand for robust, high-capacity, and intelligent networking solutions has never been greater. The Energy Sciences Network (ESnet) is at the forefront of this technological revolution, pioneering advancements that are shaping the future of data-intensive science. This talk will explore how ESnet is building the next generation of technologies designed to handle the immense and growing data streams from cutting-edge scientific instruments. We will delve into the development of advanced testbeds for smart grid applications and that foster innovative network and systems research, providing a platform for groundbreaking discoveries. The talk will also cover ESnet’s collaborative efforts with the R & E community to expand fiber spectrum capacity across the Atlantic, addressing the escalating data rates required by global scientific collaborations. Furthermore, we will discuss the integration of artificial intelligence systems to optimize network performance and the pioneering research in quantum networking that promises to transform how quantum computing might scale.

– Mr Inder Monga, Executive Director, Energy Sciences Network (ESnet)

02:30pm – 02:50pmThe Asia Pacific Research Platform in APAN & KRP: An Overview

This presentation is the current status and future plan of the APRP(Asia Pacific Research Platform) which is a collaborative effort among research/education networks and distributed computing resources in the Asia Pacific region.

Its goal is to facilitate collaboration among researchers and institutions by providing a high-performance, reliable, and secure network infrastructure and also distributed CPU/GPU-based computing.

The APRP project focuses on several key areas, including high-speed network connectivity, data storage and management, and advanced computing capabilities. The APRP network infrastructure and computing resources are designed to enable data-intensive research in a range of fields, including bioinformatics, genomics, climate science, earth science, AI science, and particle physics.

The project also includes efforts to develop human capacity and promote collaboration among researchers in the Asia Pacific region. This includes training programs, workshops, and other initiatives to help researchers build the skills and knowledge needed to conduct research using the APRP infrastructure.

Overall, the AP-RP project aims to support the development of a strong and collaborative research community in the Asia Pacific region and to position the region as a leader in global research and innovation.

– Dr Jeonghoon Moon, Principal Researcher, Korea Institute of Science and Technology Information (KISTI)

02:50pm – 03:10pmAustralia Research Platform and the SKA

This presentation will cover the infrastructure and planning underway in Australia to support a national program to use Genomic Sequencing to match cancer patients with new drugs and personalised treatments.

This initiative is part of a pathfinder to the longer term implementation of a National Clinical Genomic program to securely deliver Genomic sequencing as a regular pathology element to health care providers and individuals to predict and treat diseases and to optimise the planning in hospital settings to respond to anticipated disease trends.

The significant data volumes from regular sequencing presents interesting challenges in the development of secure networks, secure data storage and secure access methods.

– Mr Andrew Howard, Associate Director, National Computational Infrastructure (NCI), Canberra Australia

03:10pm – 03:30pmOpen Data eXchange and Research Platform in Korea

Recently KISTI is building a new Open Data eXchange in Korea to enable high performance data transfer among several networks and data centers. In this talk, I’d like to introduce the Korea Open Data eXchange and but also an AI research platform on KREONET.

– Dr Chanjin Park, Head, KREONET Services, Korea Institute of Science and Technology Information (KISTI)

03:30pm – 04:00pmTea Break

04:00pm – 04:15pmESnet High-Touch Telemetry – Finding a needle in a needle-stack

The Energy Sciences Network (ESnet) is the high performance network of the US Department of Energy Office of Science (DOE SC). DOE SC is the largest supporter of basic research in the physical sciences in the US, consisting of 6 programs (i.e., Advance Scientific Computing Research (ASCR), Basic Energy Sciences (BES), Biological and Environmental Research (BER), Fusion Energy Sciences (FES), High Energy Physics (HEP), and Nuclear Physics) that support a variety of experiments across its 28 facilities, including international partner collaborations. Over its 36-year span, ESnet has evolved to meet the requirements of ever changing scientific workflows. This presentation will provide a brief history of ESnet’s generational changes and highlight the capabilities of its current generation network ESnet6.

– Mr Chin Guok, Chief Technology Officer, Energy Sciences Network (ESnet)

04:15pm – 04:30pmAccelerated ONION based on DTN experience

He has worked on the administration and management of supercomputing systems at Osaka University. Also, through the international collaboration, he has experienced of using DTN. This talk will introduce one of science DMG project in Osaka University based on the experience of using DTN.

– Dr Susumu Date, Associate Professor of the Cybermedia Center, Osaka University

04:30pm – 04:45pmSCinet, NRE Program, and OFCnet

The Research and Development team at Ciena has a long established reputation for creating world class networking products and solutions. Many of the company’s technology advancements in computer communications technologies originated though collaboration with universities and partnerships with the global R&E community. For over two decades, as active collaborators and participants in R&E events, NRP, GRP meetings and research industry trade shows, Ciena has brought it’s emerging product technologies to this community to create shared success. In this talk, Mr. Wilson will present a retrospective summery from SC24 (America) as exhibited via the SCinet facility, and Network Research Experiment focus demonstrations. He will also present OFCnet, a live demo network feature of the Optical Fibre Conventions exhibition being held in San Francisco, California March 30-April 3, 2025.

– Mr Rodney Wilson, Chief Technologist, External Research, Ciena Corporation, R&D Labs, Ottawa Canada

04:45pm – 05:00pmSupporting International Partnerships in Science: The Role of International Networks at Indiana University

This presentation by International Networks at Indiana University will highlight how science drivers including international science collaborations and distributed big science instruments are shaping the development of infrastructure services, architecture, and design, including initial efforts to upgrade TransPAC to 400G. The discussion will then showcase collaborative efforts between IN@IU and other research communities, including support for experiments at the annual Supercomputing Conference, SC Asia, and the data mover challenge (DMC). The aim of the presentation is to demonstrate the importance of international collaboration in enabling cutting-edge research and showing the role that International Networks and our partner organizations in Asia play by providing both infrastructure and engineering support.

– Ms Brenna Meade, Network Engineer, International Networks at Indiana University

05:00pm – 05:15pmThe Global Research Platform: An Overview

This presentation provides an overview of the Global Research Platform (GRP), an international scientific collaboration creating innovative advanced ubiquitous services integrating resources around the globe at speeds of 100s of Gbps and terabits per second, especially for large-scale data-intensive science research. GRP focuses on design, implementation, and operation strategies for next-generation distributed services and infrastructure to facilitate high-performance data gathering, analytics, transport, computing, and storage among multiple science sites world-wide. The GRP partners are collaborating to customize international services, fabrics, and distributed cyberinfrastructure to support optimal data-intensive scientific workflows. Development areas include: a) Next-Generation Research Platforms; b) Orchestration Among Multiple Domains; c) Large-Scale Data Transport; d) High-Fidelity Data Flow Monitoring, Visualization, Analytics, Diagnostics, Event Correlation, AI/ML/DL; e) Data-Intensive Science and Programmable Networking; f) Networking and Communication Service Automation; and, g) International Testbeds for Data-Intensive Science.

– Prof Joe Mambretti, Director, International Center for Advanced Internet Research, Northwestern University, StarLight International/National Communications Exchange Facility

05:15pm – 05:30pmFiber Sensing using State of Polarization

In this presentation we will highlight the different challenges that Fiber sensing using SOP (State Of Polarization) is facing.

We will look at the technology and how we can overcome the limitations.

We will also describe the SMART Cable technology that is available in the submarine cables as well as Digital acoustic type of fiber sensing and why SOP is looking so promising.

– Mr Marc Lyonnais, Director, External Research, Ciena

05:30pm – 06:00pmReflection on DMC

Over the past century, our Research and Education networks have upgraded from 1G to 10G, with 100G becoming the dominant international connection speed. During this period, the development of large data production from instruments like the LHC, Copernicus, genomic sequencers, Electron microscopes, and new telescopes like LSST and SKA has necessitated the development of new tools and services to support rapid data capture and distribution.

The international Data Mover Challenge (DMC), a key event of the SCA conference series, is a competition that is run once every 2 years and it aims to bring together experts from industry and academia in a bid to test their software and solutions for transferring huge amounts of research data. DMC seeks to challenge international teams to come up with the most advanced and innovative solutions for data transfer across servers located in various countries that are connected by 100Gbps international research and education networks.

This presentation will discuss how the DMC network has expanded, lessons learnt from past Challenges and an overview of the tools and services developed by contestants.

– Mr Andrew Howard, Associate Director, National Computational Infrastructure (NCI), Canberra Australia

06:00pmClosing

– Prof Joe Mambretti, Director, International Center for Advanced Internet Research, Northwestern University, StarLight International/National Communications Exchange Facility

– A/Prof Francis Lee Bu Sung, President, SingAREN

HPC

Location: Room O5 – Orchid Jr 4211 (Level 4)

Abstract: The Centre for Development of Advanced Computing (C-DAC), India, is leading a transformative workshop at Supercomputing Asia 2025 (SCA25), focusing on the advancement of High-Performance Computing (HPC), Artificial Intelligence (AI), and Quantum Computing under India’s National Supercomputing Mission (NSM). This session will highlight India’s efforts in indigenous HPC development, including cutting-edge system software, hybrid computing models, and RISC-V-based frameworks.

The workshop will delve into C-DAC’s innovations in HPC-AI, HPC-Quantum convergence, and HPC capacity building. Additionally, a high-level panel discussion featuring government, and industry experts will explore future directions, cross-sector collaborations, and global engagement opportunities in supercomputing.

This session will provide deep insights into India’s growing HPC ecosystem, fostering discussions on technological advancements, skill development, and international cooperation to drive the next generation of supercomputing solutions.


Track Chair: Dr S D Sudarsan, Executive Director, C-DAC Bangalore

[Invited Track]

Programme:

TimeSession
01:30pm – 03:30pmOpening and Introduction

– Dr Mohammed Misbahuddin, Scientist ‘F’ & Group Head, C-DAC Bangalore

India’s National Supercomputing Mission: Building a Computational Ecosystem

The National Supercomputing Mission (NSM) is a visionary initiative by the Government of India, aimed at enhancing the country’s High-Performance Computing (HPC) ecosystem. Led by MeitY and DST, and executed by C-DAC and IISc, NSM focuses on deploying indigenous supercomputers, developing HPC software solutions, and fostering skilled manpower. With the successful deployment of PARAM-series supercomputers across academic and research institutions, NSM is accelerating advancements in AI, scientific research, and industrial applications. This session will showcase NSM’s impact, future roadmap, and India’s strategic position in the global HPC landscape

– Ms Sunita Verma, Scientist ‘G’ & Group Coordinator, R&D in IT, Mininstry of Electonics & IT (MeitY), Govt. of India

The Centre for Development of Advanced Computing (C-DAC) has been at the forefront of India’s supercomputing revolution, driving innovations in High-Performance Computing (HPC) through the National Supercomputing Mission (NSM). From pioneering the PARAM series to developing indigenous HPC software, AI-HPC frameworks, and RISC-V-based computing, C-DAC has significantly contributed to India’s self-reliance in supercomputing. Its advanced HPC infrastructure powers scientific research, industry applications, and national development initiatives. This session will highlight C-DAC’s supercomputing journey, key milestones, and future roadmap in making India a global leader in HPC and AI-driven computing.

– Mr Magesh E, Director General, C-DAC

The Centre for Development of Advanced Computing (C-DAC) is leading India’s efforts in Quantum Computing and Quantum Technologies, focusing on quantum algorithms, simulators, cryptography, and quantum-HPC integration. Under the National Supercomputing Mission (NSM) and national quantum initiatives, C-DAC is developing indigenous quantum software frameworks, hybrid quantum-classical computing models, and training programs to build a skilled workforce. With advancements in quantum simulation and secure communication, C-DAC aims to position India at the forefront of the global quantum revolution. This session will showcase C-DAC’s quantum journey, breakthroughs, and future roadmap for scalable quantum solutions.

– Dr S D Sudarsan, Executive Director, C-DAC Bangalore

C-DAC’s System Software Suite for HPC: Enhancing Computational Efficiency

The Centre for Development of Advanced Computing (C-DAC) has been instrumental in developing indigenous HPC system software solutions to enhance the performance, scalability, and efficiency of India’s High-Performance Computing (HPC) infrastructure. C-DAC’s software stack includes parallel development environments, resource management tools, system monitoring frameworks, and AI-integrated HPC solutions. These innovations power PARAM-series supercomputers and enable seamless integration of AI, cloud, and Quantum Computing. This session will highlight C-DAC’s HPC software advancements, real-world applications, and future roadmap for building a robust and self-reliant HPC ecosystem in India.

– Ms Deepika H V, Scientist ‘F’, C-DAC Bangalore

03:30pm – 04:00pmTea Break

04:00pm – 06:00pmHybrid Computing: Combining HPC, AI, and Quantum for Next-Gen Solutions

The Centre for Development of Advanced Computing (C-DAC) is driving HPC-AI convergence to accelerate India’s progress in AI-driven research, scientific computing, and industrial applications. As part of the IndiaAI initiative, C-DAC is developing AI-optimized HPC architectures, deep learning frameworks, and large-scale AI models to support national priorities in healthcare, climate science, and security. By integrating indigenous supercomputing with AI accelerators, C-DAC is enabling faster, more efficient AI processing for cutting-edge applications. This session will highlight C-DAC’s contributions to HPC-AI, IndiaAI, and the future of AI-powered supercomputing

– Mr Ramesh Naidu, Scientist ‘F’ (Artificial Intelligence), C-DAC Bangalore

The Centre for Development of Advanced Computing (C-DAC) is spearheading India’s Quantum Computing initiatives, focusing on quantum algorithms, quantum-HPC integration, cryptographic security, and quantum simulations. As part of national missions, C-DAC is developing indigenous quantum software frameworks, hybrid quantum-classical computing platforms, and training programs to accelerate India’s capabilities in Quantum Technologies. Ongoing projects include quantum key distribution (QKD), quantum simulators, and AI-driven quantum computing research. This session will showcase C-DAC’s key quantum initiatives, technological breakthroughs, and strategic roadmap in making India a leader in Quantum Computing and secure communication.

– Dr Asvija B, Scientist ‘F’ & Mission Director (Quantum Technologies), C-DAC

Capacity Building: Creating the manpower for HPC

Capacity building in High-Performance Computing (HPC), Artificial Intelligence (AI), and Quantum Computing is essential to bridge the gap between academic learning and industry demands. The Centre for HPC Upskilling and Knowledge-sharing (C-HUK), an initiative by C-DAC, plays a pivotal role in equipping researchers, students, and professionals with advanced computational skills. Through structured online courses, faculty development programs, hands-on training, and industry collaborations, C-HUK fosters a skilled workforce ready for next-generation computing challenges. This session will highlight C-HUK’s impact, innovative training methodologies, and future roadmap in strengthening India’s HPC talent pipeline.

– Dr Mohammed Misbahuddin, Scientist ‘F’ & Group Head, C-DAC Bangalore

Panel Discussion: Shaping the Future of HPC in Asia

– Dr S D Sudarsan, Executive Director, C-DAC Bangalore

– Mr Parminder Singh, Associate Vice President, VVDN Technologies

– Dr Manish Modani, Principal Solution Architect, NVIDIA

Closing

Location: Orchid Ballroom (Level 4)

*Note: Programme is subject to changes.