Explore, Learn Data & get inspired
for Trustworthy AI
🧑‍💻 Exploring ML, DL, and beyond
"Artificial Intelligence (AI) research, a dynamic and rapidly evolving subdiscipline within computer science, is dedicated to designing algorithms, developing methodologies, and creating intelligent systems capable of emulating sophisticated human cognitive functions. These advanced functions encompass areas such as speech recognition, image and pattern recognition, autonomous decision-making, complex problem-solving, and adaptive data-driven learning, including the emerging fields of generative AI and trustworthy AI.
In pursuit of these objectives, we have conducted extensive research using a broad spectrum of advanced technologies, including machine learning (ML), deep learning (DL), natural language processing (NLP), computer vision (CV), and reinforcement learning (RL). The outcomes and insights from our AI research often include practical code examples and interactive demonstrations designed to facilitate hands-on learning. These resources support educational initiatives, helping students and professionals grasp and apply complex AI concepts. Moreover, our research insights contribute to wider knowledge dissemination through comprehensive publications, systematically compiled and released in authoritative book form. This commitment to knowledge sharing ensures that the latest advancements and innovations in AI are accessible not only to fellow researchers and practitioners but also to educators, developers, policy-makers, and enthusiasts across diverse sectors and disciplines."
Publication
H. Kim, Artificial Intelligence Leading Mobility Innovation: A Technological Journey Towards Future Society, Mobility Humanities Series. Seoul, South Korea: LPbook, Dec. 15, 2024. ISBN: 9791192647548. Available: https://product.kyobobook.co.kr/detail/S000214930063.
As AI becomes more ubiquitous in our daily lives, it is vital to develop AI systems that can co-exist harmoniously with humanity for the betterment of all.
👾 Research on Explainable AI for the harmonious co-existence of humans and AI
"With AI growing increasingly sophisticated and embedded within critical decision-making processes, the need for transparency and reliability is more pressing than ever. Two concepts at the heart of addressing these needs are Explainable AI (XAI) and Trustworthy AI.
Explainable AI refers to methods and techniques aimed at making AI decisions understandable to humans. Unlike traditional "black-box" AI models, whose internal decision-making processes are obscure or entirely opaque, XAI enables stakeholders to comprehend how and why specific outcomes or predictions are reached. By providing clear explanations, XAI helps users verify, interpret, and trust AI systems, ultimately fostering wider acceptance and responsible use.
For instance, consider a medical diagnosis scenario where an AI model predicts the likelihood of a patient having a particular disease. A black-box model would merely provide a probability score without context, leaving doctors uncertain about how to interpret or trust the outcome. In contrast, an XAI approach would explain the prediction clearly, perhaps highlighting that the patient's age, family history, and recent symptoms were crucial factors influencing the diagnosis. Such transparency empowers medical professionals to confidently integrate AI-assisted insights into their decision-making processes.
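One simple way to make the idea above concrete is to use an inherently interpretable model, such as a shallow decision tree, and inspect which features drive its predictions. The sketch below is purely illustrative: the patient features (age, family history, symptom score) and labels are synthetic, the feature names are our own invention, and scikit-learn is assumed to be available. It is a minimal sketch of the concept, not a clinical tool.

```python
# Minimal sketch of an interpretable diagnosis model (assumes scikit-learn).
# All data below is synthetic and for illustration only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical patient features: age, family_history (0/1), symptom_score
n = 500
age = rng.integers(20, 80, n)
family_history = rng.integers(0, 2, n)
symptom_score = rng.random(n)
X = np.column_stack([age, family_history, symptom_score])

# Synthetic "ground truth": risk driven mainly by age and family history
risk = 0.02 * age + 0.8 * family_history + 0.5 * symptom_score
y = (risk + rng.normal(0, 0.2, n) > 2.0).astype(int)

# A shallow tree is small enough for a human to inspect end to end
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Feature importances provide a global explanation of the model's behavior,
# in contrast to a black box that returns only a probability score
feature_names = ["age", "family_history", "symptom_score"]
importances = dict(zip(feature_names, model.feature_importances_))
for name, score in sorted(importances.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```

In practice, post-hoc explanation methods (e.g., feature-attribution techniques) extend this idea to complex black-box models, but the goal is the same: letting a clinician see *which* factors drove a prediction before acting on it.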
Trustworthy AI expands beyond transparency, emphasizing reliability, fairness, accountability, privacy, and security within AI systems. It encompasses the ethical dimension of AI: ensuring systems operate within well-defined ethical guidelines, preventing bias and discrimination, and safeguarding sensitive information. The goal is AI systems that are not merely accurate but ethically aligned, fair, and beneficial for all stakeholders.
For example, an AI system used by a financial institution to assess loan applications must reliably demonstrate fairness and impartiality. A trustworthy AI system would actively avoid discriminatory practices, transparently showing how each applicant's creditworthiness is evaluated based solely on relevant, unbiased criteria. In addition, it would securely handle sensitive personal data, maintaining confidentiality and integrity.
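A simple fairness check that is often applied in settings like the loan example above is the disparate-impact ratio (the "80% rule"): comparing approval rates across applicant groups. The sketch below uses entirely synthetic decisions and group labels, and the 0.8 threshold is a common rule of thumb rather than a legal standard; a real audit would require far more care.

```python
# Minimal sketch of a disparate-impact fairness check.
# Decisions and group labels are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical model decisions (True = approve) for two applicant groups,
# with deliberately different base approval rates to show the metric
group = rng.integers(0, 2, 1000)  # 0 = group A, 1 = group B
approved = rng.random(1000) < np.where(group == 0, 0.60, 0.45)

def disparate_impact(approved, group):
    """Ratio of the lower group approval rate to the higher one."""
    rate_a = approved[group == 0].mean()
    rate_b = approved[group == 1].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

ratio = disparate_impact(approved, group)
print(f"Disparate-impact ratio: {ratio:.2f}")

# Rule of thumb: ratios below 0.8 warrant investigation for bias
flagged = ratio < 0.8
```

Metrics like this are only one ingredient of trustworthy AI; a system can pass a demographic-parity check while still relying on proxy variables or mishandling sensitive data, which is why fairness auditing is paired with accountability and privacy safeguards.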
At ELDiLAB, we recognize the importance of these concepts and intend to study XAI with a variety of approaches and introduce it through simple, relatable examples. Our goal is to bridge the gap between complex AI technologies and their practical implications. By demystifying AI and advocating for trustworthy systems, ELDiLAB contributes to responsible and beneficial AI development, paving the way toward a future where AI technologies enhance human capabilities ethically and transparently."