Explore, Learn Data & get inspired
for Trustworthy AI
AI (ML/DL) Research & Publication
"AI research, a subdiscipline of computer science, seeks to implement algorithms and systems that can emulate human cognitive abilities, such as speech recognition, decision-making, and data learning.
We've studied AI using a diverse array of tools and techniques - machine learning, deep learning, natural language processing, and computer vision - to forge intelligent systems that can adapt and refine themselves over time.
The fruits of our research may be utilized for AI education or disseminated in book form. "
"As AI becomes more ubiquitous in our daily lives, it is vital to develop AI systems that can co-exist harmoniously with humanity to the betterment of all."
Research on Explainable AI for the harmonious co-existence of humans and AI
"Explainable AI (XAI) is an emerging field of research that aims to create transparent and comprehensible AI systems. The goal of XAI is to develop AI systems that are able to safely, ethically, and reliably interact with humanity.
XAI research entails the development of algorithms and techniques that enable AI systems to explain their decisions and actions in a manner readily accessible to humans. These techniques may involve visualizations, natural language explanations, or other forms of communication that are intuitive and understandable.
One approach to XAI is to build intrinsically interpretable models, whose inner workings can be understood directly. This approach is particularly useful in applications where trust and transparency are critical, such as healthcare or finance.
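As a minimal sketch of this approach, the example below trains a shallow decision tree and prints its learned rules as readable if/else statements. The library (scikit-learn), the dataset, and the tree depth are illustrative assumptions, not a prescribed setup.

# A minimal sketch of an intrinsically interpretable model: a shallow
# decision tree whose learned rules can be printed and read directly.
# Assumes scikit-learn is installed; dataset and depth are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# Keep the tree shallow so every decision path stays human-readable.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X, y)

# export_text renders the tree as nested if/else rules over named features.
print(export_text(model, feature_names=list(data.feature_names)))

Because the whole model is the explanation, no separate explanation step is needed; the trade-off is that such simple models may be less accurate than opaque ones.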
Another approach is to employ post-hoc explanations, which explain an AI system's decisions after the fact. These explanations may take the form of visualizations, natural language descriptions, or other forms of communication.
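A complementary sketch of a post-hoc explanation is shown below: permutation importance computed on held-out data after an opaque random forest has been trained. Again, the library (scikit-learn), the dataset, and the model are assumptions chosen only for illustration.

# A minimal sketch of a post-hoc explanation: permutation importance
# applied to a black-box model after training. Assumes scikit-learn;
# the dataset, model, and split are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# Train the opaque model first; the explanation is computed afterwards.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in score:
# features whose shuffling hurts most contributed most to the decisions.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")

The model itself is left untouched; the explanation is a separate analysis layered on top, which is what makes this a post-hoc technique.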
ELDiLAB intends to study XAI through a variety of approaches and to introduce it through simple examples.