NEWS

KAUST-GSAI Joint Workshop on Advances in AI

Date: 2021-11-19




November 24-25, 2021, 09:00-13:00 (UTC+3)

Hosted by

King Abdullah University of Science and Technology, Saudi Arabia

Gaoling School of Artificial Intelligence, Renmin University of China


AGENDA (tentative)

DAY 1 - Wednesday, November 24th, 2021

Zoom Webinar ID: 979 1473 5776



Moderator: Prof. Xin Gao, Professor, Computer Science, KAUST

09:05-09:15   Opening Remarks

Prof. Jürgen Schmidhuber, Director, KAUST AI Initiative

09:15-09:30    Introduction to KAUST AI Initiative

Prof. Bernard Ghanem, Deputy Director, KAUST AI Initiative

09:30-09:55    Vision of GSAI

Prof. Ji-Rong Wen, Executive Dean of Gaoling School of Artificial Intelligence, RUC, China

09:55-10:15    Research at the Image and Video Understanding Lab (IVUL)

Prof. Bernard Ghanem, Deputy Director, KAUST AI Initiative

Abstract: In this talk, I will give a quick and selective overview of research done in the Image and Video Understanding Lab (IVUL) at KAUST, with emphasis on three main research themes: large-scale video understanding (e.g., activity detection in untrimmed video and language-video grounding), visual computing for automated navigation (e.g., 3D classification/segmentation/detection), and fundamentals (e.g., deep neural network robustness and certification).

5 mins Q&A

10:20-10:40    Vividly Sensing the World from Sight and Sound

Prof. Di Hu, Tenure-Track Assistant Professor, GSAI

Abstract: Sight and sound are two of the most important senses for human perception. From a cognitive perspective, visual and auditory information is actually slightly discrepant, yet the percept is unified through multi-sensory integration. Moreover, when multiple senses provide input, humans usually react more accurately and efficiently than with a single sense. Inspired by this, our community has begun to explore marrying computer vision with audition in computational models, aiming to address essential problems of audio-visual learning and to develop them further into interesting and worthwhile tasks. In this talk, I will give a short review of recent progress in learning from both visual and auditory data, especially audio-visual self-supervised learning and its generalization to scene understanding.
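
As a concrete illustration of the audio-visual self-supervised learning mentioned above, here is a minimal sketch (my own, in PyTorch; not Prof. Hu's actual model) of a common contrastive-correspondence objective: embeddings of video and audio clips recorded together are pulled together, while mismatched pairs are pushed apart.

```python
# Minimal sketch of audio-visual contrastive learning (illustrative only).
import torch
import torch.nn.functional as F

def av_contrastive_loss(video_emb, audio_emb, temperature=0.07):
    """video_emb, audio_emb: (batch, dim) embeddings of temporally paired clips."""
    v = F.normalize(video_emb, dim=1)
    a = F.normalize(audio_emb, dim=1)
    logits = v @ a.t() / temperature        # (batch, batch) cosine similarities
    targets = torch.arange(v.size(0))       # the i-th video matches the i-th audio
    # Symmetric InfoNCE: video-to-audio and audio-to-video retrieval.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage (the encoders are hypothetical placeholders):
# loss = av_contrastive_loss(video_encoder(frames), audio_encoder(spectrograms))
```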

5 mins Q&A

10:45-11:05     Novel Segmentation and Quantification Methods for CT-based COVID-19 Diagnosis and Prognosis

Prof. Xin Gao, Professor, Computer Science, KAUST

Abstract: COVID-19 has caused a global pandemic and become the most urgent threat to the entire world. Tremendous efforts and resources have been invested in developing diagnostics, yet the pandemic is still ongoing.

Despite the various, urgent advances in developing artificial intelligence (AI)-based computer-aided systems for CT-based COVID-19 diagnosis, most existing methods can only perform classification, whereas the state-of-the-art segmentation methods require high levels of human intervention. In this talk, I will introduce our work on a fully automatic, rapid, accurate, and machine-agnostic method that can segment and quantify the infection regions on CT scans from different sources. Our method is founded upon three innovations: 1) an embedding method that projects an arbitrary CT scan into the same standard space, so that the trained model becomes robust and generalizable; 2) the first CT scan simulator for COVID-19, built by fitting the dynamic changes in real patients' data measured at different time points, which greatly alleviates the data scarcity issue; and 3) a novel deep learning algorithm for the large-scene-small-object problem, which decomposes the 3D segmentation problem into three 2D ones, reducing the model complexity by an order of magnitude while significantly improving the segmentation accuracy. Comprehensive experiments on multi-country, multi-hospital, and multi-machine datasets demonstrate the superior performance of our method over existing ones and suggest its important application value in combating the disease. I will finally introduce our ongoing work on developing fully interpretable AI models that "see the unseen" in the CT scans of COVID-19 survivors to diagnose the long-term sequelae of COVID-19.
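
The "3D into three 2D" decomposition can be pictured with a short sketch (mine, in NumPy; the actual pipeline, including the standard-space embedding and the CT simulator, is not reproduced here): segment every axial, coronal, and sagittal slice with a 2D model, then fuse the three resulting probability volumes.

```python
# Illustrative sketch of decomposing 3D segmentation into three 2D problems.
import numpy as np

def segment_3d_via_2d(volume, model_2d):
    """volume: (D, H, W) CT volume; model_2d(slice) -> per-pixel probability map."""
    probs = np.zeros((3,) + volume.shape)
    for d in range(volume.shape[0]):          # axial view
        probs[0, d] = model_2d(volume[d])
    for h in range(volume.shape[1]):          # coronal view
        probs[1, :, h] = model_2d(volume[:, h])
    for w in range(volume.shape[2]):          # sagittal view
        probs[2, :, :, w] = model_2d(volume[:, :, w])
    return probs.mean(axis=0) > 0.5           # average the three views, threshold
```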

5 mins Q&A

11:10-11:30    Gromov-Wasserstein Learning for Graph Matching, Partitioning, and Embedding

Prof. Hongteng Xu, Tenure-Track Associate Professor, GSAI

Abstract: Many real-world data, like PPI networks and biological molecules, are structured data represented as (attributed) graphs. From the viewpoint of machine learning, tasks on such structured data, such as network alignment and molecule analysis, can often be formulated as graph matching, partitioning, and node embedding problems. In this talk, I will introduce a novel machine learning framework called Gromov-Wasserstein Learning (GWL), a new systematic solution I proposed for structured data analysis. The GWL framework is based on a pseudo-metric on graphs called the Gromov-Wasserstein (GW) discrepancy. Given two arbitrary graphs, their GW discrepancy measures how the edges of one graph compare to those of the other, and computing it corresponds to learning an optimal transport that matches one graph's nodes with those of the other. Besides graph matching, the framework is applicable to graph partitioning and node embedding. A scalable GWL method is developed by combining a recursive partition mechanism with a proximal point algorithm.
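
For reference, a standard formulation of the GW discrepancy (my rendering in common notation; the talk's exact variant may differ) between graphs with node distributions $\mu, \nu$ and intra-graph similarity matrices $D^{(1)}, D^{(2)}$ is

$$\mathrm{GW}\big(D^{(1)}, D^{(2)}\big) \;=\; \min_{T \in \Pi(\mu,\nu)} \sum_{i,k} \sum_{j,l} \big| D^{(1)}_{ik} - D^{(2)}_{jl} \big|^2 \, T_{ij} \, T_{kl},$$

where $\Pi(\mu,\nu) = \{ T \ge 0 : T\mathbf{1} = \mu,\; T^{\top}\mathbf{1} = \nu \}$. The optimal coupling $T$ acts as a soft node-to-node correspondence: rounding it yields a graph matching, while its block structure induces a partition.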

5 mins Q&A

11:35-11:55    Fast Communication for Distributed Deep Learning

Prof. Panos Kalnis, Professor, Computer Science, KAUST

Abstract: Network communication is a major bottleneck in large-scale Distributed Deep Learning. The Computer Systems group at KAUST is building systems to minimize the communication cost and reduce the end-to-end training time. This talk will introduce four such systems: (i) SwitchML, an in-network aggregation approach that utilizes programmable switches (e.g., Barefoot Tofino) to average and broadcast the stochastic gradients among the distributed workers; (ii) GRACE, a programming framework that simplifies the implementation of compressed communication (both quantization and sparsification) on TensorFlow and PyTorch; (iii) OmniReduce, a novel implementation of AllReduce, optimized for sparse tensors in very fast networks; and (iv) DeepReduce, a Bloom filter-based compression method for sparse tensors, suitable for WANs and large-scale Federated Learning.
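
To give a flavor of the compressed communication these systems exploit, here is a generic sketch of mine in PyTorch (not the actual GRACE/OmniReduce/DeepReduce APIs): top-k sparsification sends only the k largest-magnitude gradient entries and their indices, cutting traffic from O(d) to O(k).

```python
# Illustrative top-k gradient sparsification (one common compression scheme).
import math
import torch

def topk_compress(grad, k):
    """Return the k largest-magnitude entries of a gradient tensor."""
    flat = grad.flatten()
    _, idx = torch.topk(flat.abs(), k)
    return flat[idx], idx                      # values + indices: O(k) traffic

def topk_decompress(values, idx, shape):
    """Rebuild a (sparse) estimate of the original tensor on the receiver."""
    flat = torch.zeros(math.prod(shape))
    flat[idx] = values
    return flat.reshape(shape)
```

Because such compressors are biased, in practice they are usually paired with an error-feedback correction, such as the EF21 mechanism covered in Prof. Richtárik's talk on Day 2.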

5 mins Q&A

12:00-12:20    Data-driven Discovery of Physics: When Deep Learning Meets Symbolic Reasoning

Prof. Hao Sun, Tenured Associate Professor, GSAI

Abstract: Harnessing data to model and discover complex physical systems has become a critical scientific problem in many science and engineering areas. State-of-the-art advances in AI (in particular deep learning, thanks to its rich representations for learning complex nonlinear functions) have great potential to tackle this challenge, but in general such models (i) rely on a large amount of rich data to train robustly, (ii) have generalization and extrapolation issues, and (iii) lack interpretability and explainability, carrying little physical meaning. To bridge the knowledge gaps between AI and complex physical systems in the sparse/small data regime, this talk will introduce the integration of bottom-up (data-driven) and top-down (physics-based) processes through a physics-informed learning and reasoning paradigm for the discovery of discrete and continuous dynamical systems. The talk will discuss several methods that fuse deep learning and symbolic reasoning for data-driven discovery of the mathematical equations (e.g., nonlinear ODEs/PDEs) that govern the behavior of complex physical systems, e.g., chaotic systems, reaction-diffusion processes, wave propagation, and fluid flows.
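
One concrete instance of fusing learning with symbolic structure is sparse regression over a library of candidate symbolic terms to recover a governing ODE dx/dt = f(x) from trajectory data. Below is a minimal SINDy-style sketch of mine in NumPy, assuming polynomial candidate terms; it is not necessarily one of the methods presented in the talk.

```python
# Illustrative sparse regression for equation discovery (SINDy-style).
import numpy as np

def discover_ode(X, dXdt, threshold=0.1, iters=10):
    """X: (n_samples, n_states) states; dXdt: matching time derivatives."""
    n = X.shape[1]
    # Candidate library: constant, linear, and quadratic terms.
    cols = [np.ones(len(X))] + [X[:, i] for i in range(n)]
    cols += [X[:, i] * X[:, j] for i in range(n) for j in range(i, n)]
    Theta = np.stack(cols, axis=1)
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(iters):                    # sequential thresholded least squares
        Xi[np.abs(Xi) < threshold] = 0.0
        for k in range(dXdt.shape[1]):        # refit only the surviving terms
            keep = np.abs(Xi[:, k]) >= threshold
            if keep.any():
                Xi[keep, k] = np.linalg.lstsq(Theta[:, keep], dXdt[:, k],
                                              rcond=None)[0]
    return Xi   # sparse coefficients: which library terms govern each state
```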

5 mins Q&A

DAY2- Thursday, November 25th, 2021

ZOOM webinars ID:974 3063 6148

Moderator: Prof. Hongteng Xu, Tenure-Track Associate Professor, GSAI

09:05-09:25    BernNet: Learning Arbitrary Graph Spectral Filters via Bernstein Approximation

Prof. Zhewei Wei, Tenured Associate Professor, GSAI

Abstract: Many representative graph neural networks, e.g., GPRGNN and ChebNet, approximate graph convolutions with graph spectral filters. However, existing work either applies predefined filter weights or learns them without necessary constraints, which may lead to oversimplified or ill-posed filters. To overcome these issues, we propose BernNet, a novel graph neural network with theoretical support that provides a simple but effective scheme for designing and learning arbitrary graph spectral filters. In particular, for any filter over the normalized Laplacian spectrum of a graph, our BernNet estimates it by an order-K Bernstein polynomial approximation and designs its spectral property by setting the coefficients of the Bernstein basis. Moreover, we can learn the coefficients (and the corresponding filter weights) based on observed graphs and their associated signals and thus achieve the BernNet specialized for the data. Our experiments demonstrate that BernNet can learn arbitrary spectral filters, including complicated band-rejection and comb filters, and it achieves superior performance in real-world graph modeling tasks.
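
In standard notation (my rendering; see the BernNet paper for the exact operator form), the order-$K$ Bernstein approximation of a filter $h$ over the normalized-Laplacian spectrum $\lambda \in [0, 2]$ is

$$h(\lambda) \;\approx\; \sum_{k=0}^{K} \theta_k \binom{K}{k} \Big(\frac{\lambda}{2}\Big)^{k} \Big(1 - \frac{\lambda}{2}\Big)^{K-k},$$

where setting $\theta_k \approx h(2k/K)$ recovers the target response at evenly spaced points, and non-negative coefficients $\theta_k \ge 0$ guarantee a non-negative (valid) spectral filter.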

5 mins Q&A

09:30-09:50    Artificial Intelligence (AI)-Inspired Designs of Acoustic and Photonic Systems

Prof. Ying Wu, Associate Professor, Applied Mathematics and Computational Science, KAUST

Abstract: In this talk, I will introduce our recent progress on how deep learning models have inspired us in solving real-world problems across different applications. In particular, I will focus on deterministic and probabilistic deep learning models for the inverse design of a broadband acoustic cloak, which can conceal an object from incident acoustic waves over a broad frequency range, and on a deep learning framework based on a fully connected neural network for designing a plasmonic metascreen for efficient light trapping in ultrathin silicon solar cells.

5 mins Q&A

09:55-10:15    Optimal Pricing of Information

Prof. Weiran Shen, Tenure-Track Assistant Professor, GSAI

Abstract: A decision maker looks to take an active action (e.g., purchase some goods or make an investment). The payoff of this active action depends on his own private type as well as a random and unknown state of nature. To decide between this active action and another passive action, which always yields a safe constant utility, the decision maker may purchase information from an information seller. The seller can access the realized state of nature, and this information helps the decision maker (i.e., the information buyer) better estimate his payoff from the active action. We study the seller's problem of designing a revenue-optimal pricing scheme to sell her information to the buyer. Assuming the buyer's private type and the state of nature are drawn from two independent distributions, we fully characterize the optimal pricing mechanism for the seller in closed form. Specifically, under a natural linearity assumption on the buyer's payoff function, we show that an optimal pricing mechanism is a threshold mechanism, which charges each buyer type some upfront payment and then reveals whether the realized state is above or below some threshold. The payment and the threshold generally differ across buyer types and are carefully tailored to the different amounts of risk each buyer type can take. The proof of our results relies on novel techniques and concepts, such as upper/lower virtual values and their mixtures, which may be of independent interest.
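
To make the threshold mechanism concrete, consider a toy instance (my own illustration, not an example from the paper): the state is $\omega \sim U[0,1]$, and a buyer of type $\theta$ earns $\omega - \theta$ from the active action and $0$ from the passive one. Uninformed, his best payoff is $\max(1/2 - \theta,\, 0)$. If the seller reveals the signal $\mathbf{1}[\omega \ge \tau]$, the buyer acts only on the high signal and expects

$$(1 - \tau)\Big(\frac{1+\tau}{2} - \theta\Big), \qquad \text{since } \mathbb{E}[\omega \mid \omega \ge \tau] = \frac{1+\tau}{2}.$$

For $\theta = 0.6$ and $\tau = 0.6$ this equals $0.4 \times 0.2 = 0.08$, versus $0$ uninformed, so the seller can charge this type an upfront fee of up to $0.08$; the optimal mechanism tailors the threshold and the fee to each type.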

5 mins Q&A

10:20-10:40    EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback

Prof. Peter Richtárik, Professor, Computer Science, KAUST

Abstract: First proposed by Seide et al (2014) as a heuristic, error feedback (EF) is a very popular mechanism for enforcing convergence of distributed gradient-based optimization methods enhanced with communication compression strategies based on the application of contractive compression operators. However, the existing theory of EF relies on very strong assumptions (e.g., bounded gradients) and provides pessimistic convergence rates (e.g., while the best known rate for EF in the smooth nonconvex regime, when full gradients are compressed, is O(1/T^{2/3}), the rate of gradient descent in the same regime is O(1/T)). Recently, Richtárik et al (2021) proposed a new error feedback mechanism, EF21, based on the construction of a Markov compressor induced by a contractive compressor. EF21 removes the aforementioned theoretical deficiencies of EF and at the same time works better in practice. In this work we propose six practical extensions of EF21: partial participation, stochastic approximation, variance reduction, proximal setting, momentum, and bidirectional compression. Our extensions are supported by strong convergence theory in the smooth nonconvex and Polyak-Łojasiewicz regimes. Several of these techniques were never analyzed in conjunction with EF before, and in cases where they were (e.g., bidirectional compression), our rates are vastly superior.
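
For readers unfamiliar with the mechanism, the core EF21 recursion (written from memory in common notation; see the paper for the precise form) maintains a per-worker gradient estimate $g_i^t$ and communicates only a compressed correction:

$$x^{t+1} = x^t - \gamma \cdot \frac{1}{n} \sum_{i=1}^{n} g_i^t, \qquad g_i^{t+1} = g_i^t + \mathcal{C}\big(\nabla f_i(x^{t+1}) - g_i^t\big),$$

where $\mathcal{C}$ is a contractive compressor (e.g., top-$k$). As the iterates stabilize, the corrections shrink, so the estimates track the true gradients without the bounded-gradient assumptions of classical EF.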

5 mins Q&A

10:45-11:00    Tea Break

11:00-12:30     Wrap-Up Panel (Closed-door)

Co-Moderators

Prof. Ji-Rong Wen, Executive Dean, GSAI

Prof. Xin Gao, Professor, Computer Science, KAUST


List of Participants

Speakers from Gaoling School of Artificial Intelligence, Renmin University of China

See https://gsai.ruc.edu.cn/addons/teacher/enhome.html

Speakers from King Abdullah University of Science and Technology, Saudi Arabia

(The CVs are organized in the speaking order of the meeting.)

Jürgen Schmidhuber Since age 15 or so, the main goal of Professor Jürgen Schmidhuber has been to build a self-improving Artificial Intelligence (AI) smarter than himself, then retire. His lab's deep learning neural networks, based on ideas published in the "Annus Mirabilis" of 1990-1991, have revolutionised machine learning and AI. By the mid-2010s, they were on 3 billion devices and used billions of times per day by users of the world's most valuable public companies, e.g., for greatly improved (CTC-LSTM-based) speech recognition on all Android phones, greatly improved machine translation through Google Translate and Facebook (over 4 billion LSTM-based translations per day), Apple's Siri and QuickType on all iPhones, the answers of Amazon's Alexa, and numerous other applications. In 2011, his team was the first to win official computer vision contests with deep neural nets, achieving superhuman performance. In 2012, they had the first deep NN to win a medical imaging contest (on cancer detection). All of this attracted enormous interest from industry. His research group also established the fields of mathematically rigorous universal AI and recursive self-improvement in metalearning machines that learn to learn (since 1987). In 1990, he introduced unsupervised adversarial neural networks that fight each other in a minimax game to achieve artificial curiosity (GANs are a special case). In 1991, he introduced very deep learning through unsupervised pre-training, as well as neural fast weight programmers formally equivalent to what is now called linear Transformers. His formal theory of creativity, curiosity, and fun explains art, science, music, and humor. He also generalized algorithmic information theory and the many-worlds theory of physics, and introduced the concept of Low-Complexity Art, the information age's extreme form of minimal art. He is the recipient of numerous awards, the author of over 350 peer-reviewed papers, and Chief Scientist of the company NNAISENSE, which aims at building the first practical general-purpose AI. He is a frequent keynote speaker and advises various governments on AI strategies.

Bernard Ghanem is currently an Associate Professor in the CEMSE Division, Deputy Director of the AI Initiative, and a theme leader at the Visual Computing Center (VCC) at KAUST. His research interests lie in computer vision and machine learning, with emphasis on topics in video understanding, 3D recognition, and the foundations of deep learning. He received his Bachelor's degree from the American University of Beirut (AUB) in 2005 and his MS/PhD from the University of Illinois at Urbana-Champaign (UIUC) in 2010. His work has received several awards and honors, including six Best Paper Awards at workshops in CVPR 2013, 2019, and 2021, ECCV 2018 and 2020, and ICCV 2021, a two-year KAUST Seed Fund, a Google Faculty Research Award in 2015 (first in MENA for Machine Perception), and an Abdul Hameed Shoman Arab Researchers Award for Big Data and Machine Learning in 2020. He has co-authored more than 150 peer-reviewed conference and journal papers in his field, as well as two issued patents. He serves as an Associate Editor for IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) and has served several times as Area Chair (AC) for top-tier conferences in computer vision and machine learning, including CVPR, ICCV, ICLR, and AAAI.

Visit ivul.kaust.edu.sa and www.bernardghanem.com for more details.

Xin Gao is a professor of computer science in the CEMSE Division at KAUST. He is also the Associate Director of the Computational Bioscience Research Center (CBRC), Deputy Director of the Smart Health Initiative (SHI), and the Lead of the Structural and Functional Bioinformatics (SFB) Group at KAUST. Prior to joining KAUST, he was a Lane Fellow at the Lane Center for Computational Biology in the School of Computer Science at Carnegie Mellon University. He earned his bachelor's degree in Computer Science in 2004 from Tsinghua University and his Ph.D. in Computer Science in 2009 from the University of Waterloo.

Dr. Gao’s research interest lies at the intersection between computer science and biology. In the field of computer science, he is interested in developing machine learning theories and methodologies related to deep learning, probabilistic graphical models, kernel methods and matrix factorization. In the field of bioinformatics, his group works on building computational models, developing machine learning techniques, and designing efficient and effective algorithms to tackle key open problems along the path from biological sequence analysis, to 3D structure determination, to function annotation, to understanding and controlling molecular behaviors in complex biological networks, and, recently, to biomedicine and healthcare.

He has published more than 270 papers in the fields of bioinformatics and machine learning. He is an associate editor of the Journal of Translational Medicine, Genomics, Proteomics & Bioinformatics, BMC Bioinformatics, the Journal of Bioinformatics and Computational Biology, and Quantitative Biology, and a guest editor-in-chief of IEEE/ACM Transactions on Computational Biology and Bioinformatics, Methods, and Frontiers in Molecular Biosciences.

Panagiotis Kalnis is a Professor at the King Abdullah University of Science and Technology (KAUST, http://www.kaust.edu) and served as Chair of the Computer Science program from 2014 to 2018. In 2009 he was a visiting assistant professor at Stanford University. Before that, he was an assistant professor at the National University of Singapore (NUS). In the past, he was involved in the design and testing of VLSI chips and worked at several companies on database design, e-commerce projects, and web applications. He served as an associate editor for the IEEE Transactions on Knowledge and Data Engineering (TKDE) from 2013 to 2015, and on the editorial board of the VLDB Journal from 2013 to 2017. He received his Diploma from the Computer Engineering and Informatics Department, University of Patras, Greece, in 1998 and his PhD from the Computer Science Department, Hong Kong University of Science and Technology (HKUST), in 2002. His research interests include Big Data, Parallel and Distributed Systems, Large Graphs, and Systems for Machine Learning.

https://scholar.google.com/citations?user=-NdSrrYAAAAJ

Ying Wu is an associate professor in Applied Mathematics and Computational Sciences with secondary affiliations with the Electrical and Computer Engineering and Applied Physics programs. She received her BSc from Nanjing University in 2002 and her PhD from the Hong Kong University of Science and Technology (HKUST) in 2008. Her research focuses on the development of innovative models and computational tools to describe wave propagation in complex systems. She serves as a co-editor for EPL and an associate editor for Wave Motion. She was awarded the Young Investigator Award by the International Phononics Society in 2017.

Peter Richtárik is a professor of Computer Science at the King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia, where he leads the Optimization and Machine Learning Lab. At KAUST, he has a courtesy affiliation with the Applied Mathematics and Computational Sciences program and the Statistics program, and is a member of the Visual Computing Center and the Extreme Computing Research Center. Prof. Richtárik is a founding member and a Fellow of the Alan Turing Institute (UK National Institute for Data Science and Artificial Intelligence), and an EPSRC Fellow in Mathematical Sciences. From 2017 to 2019, he was a Visiting Professor at the Moscow Institute of Physics and Technology. Prior to joining KAUST, he was an Associate Professor of Mathematics at the University of Edinburgh, and held postdoctoral and visiting positions at Université Catholique de Louvain, Belgium, and the University of California, Berkeley, USA, respectively. He received his PhD in 2007 from Cornell University, USA.

Prof. Richtárik's research interests lie at the intersection of mathematics, computer science, machine learning, optimization, numerical linear algebra, and high-performance computing. Through his work on randomized and distributed optimization algorithms, he has contributed to the foundations of machine learning, optimization and randomized numerical linear algebra. He is one of the original developers of Federated Learning – a new subfield of artificial intelligence whose goal is to train machine learning models over private data stored across a large number of heterogeneous devices, such as mobile phones or hospitals, in an efficient manner, and without compromising user privacy. In an October 2020 Forbes article, and alongside self-supervised learning and transformers, Federated Learning was listed as one of three emerging areas that will shape the next generation of Artificial Intelligence technologies.

Prof. Richtárik's work has attracted international awards, including a Best Paper Award at the NeurIPS 2020 Workshop on Scalability, Privacy, and Security in Federated Learning (joint with S. Horvath), a Distinguished Speaker Award at the 2019 International Conference on Continuous Optimization, the SIAM SIGEST Best Paper Award (joint with O. Fercoq), and the IMA Leslie Fox Prize (second prize, three times, awarded to two of his students and a postdoc). Several of his works are among the most read papers published by the SIAM Journal on Optimization and the SIAM Journal on Matrix Analysis and Applications. Prof. Richtárik serves as an Area Chair for leading machine learning conferences, including NeurIPS, ICML, and ICLR, and is an Area Editor of the Journal of Optimization Theory and Applications, an Associate Editor of Optimization Methods and Software, and a Handling Editor of the Journal of Nonsmooth Analysis and Optimization.
