Artificial Intelligence, inclusive education and employment: opportunities and challenges

At the beginning of October 2023, the European Disability Forum (EDF) organised a workshop on “Artificial Intelligence, Inclusive Education and Employment”, bringing together 80 participants, including experts and EDF members, to discuss the impact of Artificial Intelligence (AI) on persons with disabilities, particularly in education and employment. The workshop aimed to raise awareness and understanding of the opportunities and challenges that AI brings for this group, and to find ways to ensure that AI is used to promote inclusion rather than reinforce existing inequalities.

The increasing role of technology in our daily lives

Humberto Insolera, EDF Executive Committee member, opened the event by highlighting the growing role of technology in our daily lives and in areas such as healthcare, education and transport. Insolera discussed the importance of human supervision of AI to prevent it from reaching wrong conclusions or performing undesirable actions, since AI, unlike humans, has no moral compass. He elaborated on EDF’s involvement in AI, emphasising AI’s potential to improve the lives of persons with disabilities and promote greater independence. He gave examples of AI applications developed for persons with disabilities, such as an app that provides audio descriptions for people with visual impairments and a virtual assistant for people with Alzheimer’s disease. Insolera also addressed the impact of AI on human rights, especially for persons with disabilities. He referred to Gerard Quinn, the United Nations Special Rapporteur on the rights of persons with disabilities, who has written a report on AI highlighting the importance of equal treatment, work, study and privacy for persons with disabilities. In the report, Quinn states that persons with disabilities are often left behind by the digital divide and warns that if we are excluded from the AI revolution, we may never catch up.

The importance of training AI

Kave Noori, AI Policy Officer at EDF, gave a crash course on AI, explaining that conventional computer programmes work according to predefined rules, while AI learns to perform certain actions from patterns and statistical correlations in data. He stressed the importance of training AI on diverse data to avoid discriminatory results. He then spoke about the potential of AI in education, for example in the development of interactive learning materials, the use of speech recognition software and personalised teaching aids. Noori suggested that AI could provide solutions for people with different learning styles and abilities. However, he stressed the need for careful implementation and testing to ensure that systems are optimally designed and do not limit human rights or discriminate against minorities.

Kave Noori, AI Policy Officer at EDF, explains how hate speech detection software works. The example shows several sentences stating different facts about a neighbour. Each sentence receives a score (positive or negative) based on how the AI evaluates that fact.
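The software shown in the demo is not public, but a minimal sketch of the underlying idea might look like the following Python snippet: a classifier learns statistical patterns from a handful of labelled example sentences and then assigns every new sentence a score. The training data and test sentences below are invented for illustration; a real system would be trained on vast amounts of text.

```python
# Minimal sketch of a learned sentence scorer, in the spirit of the demo above.
# The training data is invented and far too small for real use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-made training set: 1 = flagged as hateful, 0 = acceptable.
train_texts = [
    "I hate my neighbour and want them gone",
    "People like that should not live here",
    "My neighbour is friendly and helps me out",
    "My neighbour plays music in the evening",
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Sentences stating different facts about a neighbour, each given a score.
sentences = [
    "My neighbour waters my plants when I am away",
    "My neighbour is deaf",
    "I want my neighbour gone",
]
for sentence, score in zip(sentences, model.predict_proba(sentences)[:, 1]):
    # A higher score means the sentence is more likely to be flagged. The score
    # reflects only patterns in the training data, not the sentence's actual intent.
    print(f"{score:.2f}  {sentence}")
```

As the crash course pointed out, these scores are nothing but statistical correlations learned from the training data, which is why the choice of training data determines who the system treats fairly.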

AI as a tool for personalised and individual learning

Maud Stiernet, an independent researcher and expert on AI, accessibility and children’s rights, presented perspectives on using AI as a tool for personalised and individual learning. In her interviews with children, she found that they often perceive video game creators as doing a better job of making their products accessible than the makers of the school platforms they use. Her research also shows that students prefer to ask a chatbot for help with their homework, even when they know the chatbot can sometimes give them a wrong answer. Perhaps surprisingly to today’s adults, many students also rely on the chatbot to socialise, which Stiernet says shows how important it is for students to receive social support in addition to the academic support they get from teachers. Asked whether she had any policy recommendations for EDF, she recommended prioritising personalisation and addressing the ethical implications of AI in education, such as accessibility and privacy.

The importance of diversity in the workplace

Jutta Treviranus from OCAD University in Toronto presented a slide stating that 90% of organisations use some form of AI hiring tool, according to the United States Equal Employment Opportunity Commission (EEOC).

Jutta Treviranus, Director and Professor at the Inclusive Design Research Centre at Ontario College of Art and Design (OCAD) University in Toronto, emphasised the importance of diversity in the workplace for problem-solving and innovation. Treviranus also pointed out that 90% of companies in the United States use some form of AI hiring tool, which can lead to the exclusion of persons with disabilities. She explained that these AI systems are trained on a company’s historical data and therefore repeat its previous hiring patterns. This means that people who have never been hired before, or who differ from previous employees, can be overlooked.
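To make the mechanism concrete, here is a deliberately simplified sketch with invented data (not any real vendor’s system): a model fitted to past hiring decisions learns that previous hires had no career gaps, and then ranks down a well-qualified candidate whose profile differs from that history.

```python
# Hypothetical sketch of a hiring model that reproduces past decisions.
# All data is invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per past applicant: [years_of_experience, career_gap_years]
# Label: 1 = was hired, 0 = was rejected.
past_applicants = np.array([
    [5, 0], [7, 0], [6, 0], [8, 0],   # hired: experienced, no career gaps
    [5, 2], [6, 3], [7, 2], [4, 1],   # rejected, some despite solid experience
])
past_decisions = np.array([1, 1, 1, 1, 0, 0, 0, 0])

model = LogisticRegression().fit(past_applicants, past_decisions)

# A new, well-qualified candidate with a two-year gap (for example due to
# disability-related leave) scores low simply because past hires had no gaps.
candidate = np.array([[7, 2]])
print("Predicted probability of being shortlisted:",
      model.predict_proba(candidate)[0, 1])
```

Nothing in the code mentions disability, yet the model penalises any feature, such as a career gap, that happens to correlate with past rejections.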


She went on to explain that disability is particularly challenging for AI systems because they have difficulty understanding and recognising it: “A disability is a deviation from the average and involves a great deal of diversity and variability among individuals.” AI that relies on statistical analysis is extremely accurate for individuals within the “normal” distribution, but becomes increasingly inaccurate the further one moves from the centre.
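A toy experiment (invented data, not Treviranus’s own analysis) illustrates the pattern: a model trained on examples that cluster around the average predicts well near the centre of the distribution and increasingly poorly towards the margins.

```python
# Toy illustration: prediction error grows with distance from the average,
# because the training data is dense near the centre and sparse at the margins.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Most training examples cluster around the average (x ~ Normal(0, 1)).
x_train = rng.normal(0, 1, size=500)
y_train = np.sin(x_train) + rng.normal(0, 0.1, size=500)

model = KNeighborsRegressor(n_neighbors=10).fit(x_train.reshape(-1, 1), y_train)

# Evaluate the model at increasing distance from the average.
for distance in [0.0, 1.0, 2.0, 3.0, 4.0]:
    prediction = model.predict([[distance]])[0]
    error = abs(prediction - np.sin(distance))
    print(f"distance from average: {distance:.1f}   prediction error: {error:.3f}")
```

The model is not malicious; it simply has almost no data about the outliers it is asked to judge, which is exactly the situation of many persons with disabilities.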


Current AI ethics audit tools cannot detect biases against minorities, which leaves persons with disabilities overlooked. Treviranus stressed the importance of creating clear guidelines for fair AI system audits to tackle this problem.
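One hypothetical way to picture this limitation (a sketch, not a description of any specific audit tool): an audit that compares selection rates only between large demographic groups can pass, while a small and rarely sampled group fares far worse.

```python
# Invented numbers illustrating how a coarse group-fairness audit can pass
# while a small group is still disadvantaged.
import numpy as np

rng = np.random.default_rng(1)

# Simulated hiring-tool outcomes (1 = shortlisted) for three groups.
group_a  = rng.binomial(1, 0.50, size=1000)   # large group
group_b  = rng.binomial(1, 0.48, size=1000)   # large group
disabled = rng.binomial(1, 0.20, size=30)     # small group, rarely audited

# A typical audit compares selection rates between the large groups only.
rate_a, rate_b = group_a.mean(), group_b.mean()
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Group A: {rate_a:.2f}, Group B: {rate_b:.2f}, ratio {ratio:.2f} -> audit passes")

# The small group is missing from the audit, or too small for its gap to be
# statistically significant, even though its selection rate is far lower.
print(f"Persons with disabilities: {disabled.mean():.2f} (n = 30)")
```

Clear audit guidelines would have to require that such groups are examined explicitly, rather than being averaged away.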

Ensuring diversity and integration at all levels of AI development

Eduard Fosch-Villaronga, Professor and Research Director at eLaw – Centre for Law and Digital Technologies at Leiden University, emphasised the need to consider diversity and integration at different levels of AI development. This includes ensuring diversity in algorithms, techniques and applications, as well as promoting diversity among the people in the AI development community. He also stressed that AI developers must consider the diversity of the users who interact with and are affected by AI systems. For example, he noted that persons with disabilities are largely excluded from the research and development of affective computing systems (systems dealing with human emotions), which is his area of research.

Fosch-Villaronga went on to discuss the challenges of using AI systems to recognise emotions in people with different physical, physiological and mental characteristics. As a researcher in the field, he noted that the datasets commonly used in affective computing lack diversity and inclusion: in many of them, groups defined by ethnicity, age, gender or health condition are simply not represented.

The implications of these biases in AI systems are far-reaching: they can contribute to exclusion, reinforce stereotypes, raise legal and ethical concerns, and lead to a loss of talent and innovation. He concluded by emphasising the need to integrate diversity and inclusion considerations into AI systems, stressing the importance of responsible use of AI and the key role that stakeholder representatives such as the European Disability Forum play in providing feedback and helping developers ensure that AI systems are inclusive.