Projects

These are the currently funded projects being carried out at the institute.

A Pre-trained Multimodal Commonsense Transformer for Human Omnitasking

Grant Amount: S$4.2 million 

Grant Period: 1 April 2024 – 31 March 2029

Funding Organization: AI Singapore

Humans often have to perform multiple tasks within a short period of time. For example, a tennis coach needs to instruct and monitor multiple students at different courts concurrently. Human productivity would improve considerably if we could develop an omnitasking system that assists the coach in performing such tasks more efficiently.

In this project, we aim to power human omnitasking by (1) creating a multimodal understanding and summarization system, (2) developing a multimodal commonsense Transformer that can conduct reasoning based on causality and can continuously learn, (3) adapting this Transformer to support various omnitasking tasks, and (4) developing an explainable and trustworthy AI-human collaboration framework via enhancing the above models with formal methods.

Principal Investigator

Co-Principal Investigators:

Real-Time Federated Learning on Data Streams

Grant Amount: S$1.3 million 

Funding Organization: Digital Trust Centre

This research targets the gap in building Federated Learning (FL) systems for real-time data streams, which are crucial for decision-making yet poorly served by existing FL models and systems designed for static datasets. It proposes developing advanced FL algorithms that can adapt to data dynamics such as concept drift and evolving feature spaces.

This innovative approach not only addresses the challenges of dynamic data streams but also sets a new direction for FL research and application, emphasizing privacy-preserving models and real-time decision-making in mobility and safety.
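The drift-aware federated updates described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration (the local SGD step, the sample-weighted averaging, and the crude error-spike drift check); it is not the project's actual algorithm.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """One client's local update: logistic regression via plain SGD."""
    w = w.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + np.exp(-xi @ w))
            w -= lr * (p - yi) * xi
    return w

def fedavg_round(w_global, client_batches):
    """Aggregate client updates, weighted by local sample counts."""
    updates, sizes = [], []
    for X, y in client_batches:
        updates.append(local_sgd(w_global, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

def run_stream(stream, dim, drift_threshold=0.25):
    """Process a stream of per-round client batches; reset the model
    when error on the newest batches jumps (a crude drift signal)."""
    w = np.zeros(dim)
    for client_batches in stream:
        # measure error on the fresh data before updating
        errs = [np.mean((X @ w > 0).astype(int) != y) for X, y in client_batches]
        if np.mean(errs) > drift_threshold and np.any(w):
            w = np.zeros(dim)  # hypothetical drift response: restart from scratch
        w = fedavg_round(w, client_batches)
    return w
```

In a real deployment the drift response would likely be more graduated (reweighting or partial resets rather than a full restart), and the aggregation would need privacy protections such as secure aggregation.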

Principal Investigator

Co-Principal Investigators:

Removing Unwanted Digital Footprints

Grant Amount: S$2.6 million 

Grant Period: 1 April 2024 – 31 March 2029

Funding Organization: AI Singapore

Traditionally, once an individual's data is given to organizations, these organizations are assumed to have permanent and unlimited (over time and number of uses) access to that data, resulting in privacy concerns over digital footprints. However, recent regulations such as the EU General Data Protection Regulation's (GDPR) "right to be forgotten" challenge this assumption.

Privacy regulations require that any individual be able to request the timely erasure of their data and of its impact on machine learning models. This project focuses on removing digital footprints and their influence on trained models without degrading existing AI systems, while providing ways to verify compliance.
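One way the machine-unlearning literature makes such erasure requests tractable is sharded retraining (in the style of SISA): partition the training data into shards, train an independent sub-model per shard, and on a deletion request retrain only the affected shard. The sketch below is a toy illustration of that idea using a nearest-centroid classifier; it is not the project's method, and all names are hypothetical.

```python
import numpy as np

class ShardedModel:
    """Exact unlearning via sharding: each shard trains an independent
    nearest-centroid classifier; deleting a record retrains only its shard."""

    def __init__(self, n_shards=4):
        self.n_shards = n_shards
        self.shards = [[] for _ in range(n_shards)]  # (id, x, y) tuples per shard
        self.models = [None] * n_shards

    def _shard_of(self, record_id):
        return record_id % self.n_shards

    def fit(self, X, y):
        for i, (xi, yi) in enumerate(zip(X, y)):
            self.shards[self._shard_of(i)].append((i, xi, yi))
        for s in range(self.n_shards):
            self._train_shard(s)

    def _train_shard(self, s):
        data = self.shards[s]
        if not data:
            self.models[s] = None
            return
        # store a mean feature vector (centroid) per class label
        groups = {}
        for _, xi, yi in data:
            groups.setdefault(int(yi), []).append(xi)
        self.models[s] = {c: np.mean(v, axis=0) for c, v in groups.items()}

    def forget(self, record_id):
        """Erase one training record and retrain only its shard."""
        s = self._shard_of(record_id)
        self.shards[s] = [(i, x, y) for (i, x, y) in self.shards[s] if i != record_id]
        self._train_shard(s)

    def predict(self, x):
        # majority vote over the shard sub-models
        votes = [min(m, key=lambda c: np.linalg.norm(x - m[c]))
                 for m in self.models if m]
        return max(set(votes), key=votes.count)
```

Because each shard is retrained from scratch without the deleted record, the resulting model provably contains no trace of it, which also gives an auditor a concrete compliance check: the record must be absent from every stored shard.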

Principal Investigator

Co-Principal Investigators:

Scalable Evaluation for Trustworthy Explainable AI

Grant Amount: S$2 million

Funding Organization: Digital Trust Centre

The ubiquity and impact of AI require its responsible use. To this end, explainability is a core requirement for AI adoption, helping users understand and trust AI decisions. However, although many explainable AI (XAI) techniques have been developed, few have been evaluated with actual human users.

This project proposes to develop an XAI evaluation toolkit for automatic, scalable evaluation of XAI techniques using simulated users. With this toolkit, we aim to enable the benchmarking of XAI techniques across diverse human interpretation tasks to support AI regulation and industry adoption.
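One metric a simulation-based toolkit might compute is forward-simulatability: a simulated user sees only the explanation (not the model or the full input) and must reproduce the model's decision, and the agreement rate scores the explanation. The sketch below assumes a linear model and a hypothetical top-k attribution explainer; it is illustrative only, not the toolkit's design.

```python
import numpy as np

def explain_topk(weights, x, k=2):
    """Hypothetical attribution: rank features by |w_i * x_i| and
    return the top-k (index, contribution) pairs as the explanation."""
    contrib = weights * x
    idx = np.argsort(-np.abs(contrib))[:k]
    return [(int(i), float(contrib[i])) for i in idx]

def simulated_user_predict(explanation):
    """Simulated user: sums the shown contributions and thresholds at 0,
    without access to the model or the remaining features."""
    return int(sum(c for _, c in explanation) > 0)

def forward_simulation_score(weights, X, k=2):
    """Fraction of inputs where the simulated user, given only the
    explanation, reproduces the model's actual decision."""
    agree = 0
    for x in X:
        model_pred = int(weights @ x > 0)
        user_pred = simulated_user_predict(explain_topk(weights, x, k))
        agree += (model_pred == user_pred)
    return agree / len(X)
```

When k covers all features the simulated user recovers the model exactly, so the score is 1.0; shrinking k measures how much decision-relevant information a sparser explanation loses, which is the kind of trade-off such a benchmark could quantify automatically.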

Principal Investigator

Techniques for Investigation of Undocumented Machine Learning Models

Funding Organization: Home Team Science and Technology Agency

Recently, machine learning models have demonstrated excellent performance in many tasks. The development of easy-to-use training tools and widely available datasets has further accelerated their adoption. However, widespread adoption of such technologies raises concerns about potential misuse by malicious entities. Furthermore, given the complexities of current ML tools, the lack of documentation can obscure malicious intent within ML models. 

This project seeks to establish model transparency and interpretability through forensic analysis of ML models. It intends to leverage large pre-trained models to develop automated reverse-engineering techniques, such as model inversion, so as to gain insight into a model and uncover malicious intent.

Principal Investigator

Co-Principal Investigators: