Decision Sciences Area to host seminar on ‘Trustworthy AI – Is it just a buzzword?’ on 18 July

The talk will be delivered by Dr. Rishika Sen, Data Scientist at Ericsson

8 July, 2024, Bengaluru: The Decision Sciences Area at IIM Bangalore will host a research seminar on ‘Trustworthy AI – Is it just a buzzword?’, to be led by Dr. Rishika Sen, Data Scientist at Ericsson, from 3 to 4 pm on 18 July 2024, at Q-001.

The automated analysis of the trustworthiness of AI systems is still an active area of research. The upcoming seminar will provide an in-depth examination of the ‘what’, ‘why’ and ‘how’ of Trustworthy AI, and the challenges of automating it. The session will also delve into the current and future implications of Trustworthy AI and its prospects within the industry.

Click here to register for the event: https://forms.office.com/


Abstract: Trustworthy AI – is it just a buzzword? Are we just riding the wave of a new technology, or is there more to it? Trustworthy AI refers to artificial intelligence systems that are ethical, transparent and reliable. It rests on seven principles: transparency, diversity, robustness, privacy, accountability, human oversight, and societal/environmental wellbeing.

Transparency in AI systems is usually achieved through explainability. Explainability algorithms are broadly classified into model-agnostic methods (SHAP, LIME, etc.) and model-specific methods (Integrated Gradients, DeepLIFT, etc.). To ensure diversity, non-discrimination and fairness, bias-free models are needed; bias mitigation techniques include threshold adjustment, reweighing, and model and data monitoring, to name a few. Data privacy and security are another crucial aspect of Trustworthy AI, requiring robust encryption and secure data-handling practices to protect user information. Hash functions, AES and RSA are a few of the techniques used to keep data private.
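
For readers unfamiliar with these tools, the minimal sketch below (not part of the seminar material) illustrates the model-agnostic explainability idea using the SHAP library with a scikit-learn classifier; the dataset and model choices are illustrative assumptions.

```python
# A minimal, hedged sketch of model-agnostic explainability with SHAP.
# Assumes the shap and scikit-learn packages are installed; the dataset and
# classifier are illustrative choices, not those used by the speaker.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train a simple classifier on a public dataset.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# SHAP assigns each feature a contribution to an individual prediction,
# measured relative to the model's average (baseline) output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test[:5])  # attributions for the first 5 test rows
print(shap_values)
```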

Robustness ensures that AI systems can handle a wide range of inputs and remain functional under adversarial conditions; this is often achieved through rigorous testing and validation. Popular robustness techniques include ensemble models, model and data monitoring, cross-validation and data augmentation. Accountability mechanisms must be integrated as well: maintaining detailed logs, documentation and approval signatures are some ways to implement accountability. Human agency and oversight approaches can enhance trustworthiness by allowing human intervention in critical decision-making processes. Human-in-the-loop is the most commonly used technique to ensure the model functions as expected; in it, insights from subject matter experts (SMEs) are incorporated into the development and monitoring phases of the AI model.
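
As a small illustration of one robustness practice named above, the sketch below (again, not from the seminar) runs k-fold cross-validation of an ensemble model with scikit-learn; the dataset, fold count and model are assumptions made for the example.

```python
# A minimal sketch of cross-validating an ensemble model as a robustness check.
# Assumes scikit-learn is installed; dataset and hyperparameters are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=200, random_state=0)

# Evaluate on 5 disjoint folds; a large spread across folds is a warning sign
# that the model is sensitive to changes in the training data.
scores = cross_val_score(model, X, y, cv=5)
print(f"fold accuracies: {scores}, mean={scores.mean():.3f}, std={scores.std():.3f}")
```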

Lastly, societal and environmental wellbeing need to be taken into account. Ensuring environmental wellbeing requires monitoring the energy used by an AI system, and the data must also be monitored so that it stays aligned with societal values and human rights. Finally, continuous improvement practices, such as regular updates to data and model performance and the incorporation of automated feedback, are essential for adapting AI systems to evolving ethical standards and technological advances.
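
One simple form of the data monitoring mentioned above is statistical drift detection. The sketch below (an illustrative assumption, not the speaker's method) compares a training-time feature distribution with live data using a two-sample Kolmogorov–Smirnov test from SciPy; the synthetic data and the 0.05 threshold are assumptions.

```python
# A minimal sketch of data-drift monitoring with a two-sample KS test.
# The feature arrays are synthetic and the significance threshold is illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # distribution seen at training time
live_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)   # shifted distribution in production

statistic, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.05:
    print(f"Possible drift detected (KS statistic={statistic:.3f}); review data or retrain.")
else:
    print("No significant drift detected.")
```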

Speaker profile: Rishika Sen is a Data Scientist at Ericsson. She obtained her PhD in Computer Science from the Indian Statistical Institute, Kolkata, for work on the application and development of AI/ML algorithms in bioinformatics. She obtained her Master's and Bachelor's degrees in Computer Science from the University of Calcutta. Her areas of interest include Trustworthy AI, Generative AI and Explainable AI.
