AI, Islamophobia, and the Challenges of Digital Literacy for Muslim Charities

The rapid evolution of artificial intelligence has fundamentally reshaped how society functions. From drafting emails to writing complex code, AI has become embedded in day-to-day operations, driving efficiency and expanding capacity. Its growth has outpaced policymaking, and its adoption often feels less like a choice and more like a necessity. AI is no longer an emerging concept; it is woven into our daily lives and the terrain in which we work. The question is no longer whether to engage, but how to do so on our own terms.

Yet digital literacy remains a significant challenge within the third sector. The accelerating pace of AI risks widening this gap, pushing charities further behind as they struggle to adopt and adapt to emerging technologies that could otherwise enhance their impact and efficiency. At a time when demand for charitable services continues to grow, this disparity becomes even more urgent. Furthermore, as AI increasingly takes centre stage, the accompanying harms risk undermining the crucial work of charities by increasing the very inequalities the third sector strives to overcome. For Muslim charities and the wider third sector, the challenge is to engage early and wisely, to build the skills needed to use AI effectively, and the voice to shape its adoption in line with our values. 

A first step in adopting AI is to understand what AI actually is. Its varying forms often lead to misunderstanding and misinterpretation, which can distort approaches to policy and decision-making. A recent report by Friends of the Earth categorises AI into three broad types: predictive, generative, and artificial general intelligence (AGI). Predictive AI refers to systems that use statistical analysis and machine learning to analyse data, identify patterns, and anticipate future events. Generative AI creates “original” content, such as text, images, audio, video, or code, in response to user prompts. AGI, by contrast, refers to a hypothetical intelligence surpassing human capabilities and is often framed as an existential threat; it is frequently dismissed as dystopian and largely irrelevant to current applications. This broad understanding is critical to adapting our strategies, whether adoption or advocacy, to what AI is, how it works, and how we can use it responsibly.

It is crucial, however, to ensure that conversations on AI adoption and implementation do not ignore the wider policy discussions surrounding it, particularly those relating to AI harm. Whilst civil society has long served vulnerable communities, addressing core needs such as food security, homelessness, and mental health, Muslim-led organisations face an added difficulty as they serve communities further marginalised by entrenched societal inequities. The dangers of AI are therefore even more pertinent. AI bias and Islamophobia have the potential to deepen these inequities, widening disparities and undermining civil society’s efforts to bridge them. To some, the idea of a computer system reinforcing stereotypes and exhibiting bias may sound bizarre, but evidence shows that bias can enter through data collection, algorithmic design, or human interpretation. The Dutch childcare benefit scandal serves as a sobering example. In 2019, Dutch authorities used a self-learning algorithm to generate risk profiles for identifying fraudulent child benefit claims. As a result, 26,000 applicants were wrongly accused of fraud and faced full repayment demands. In many cases, families had to repay tens of thousands of euros, driving them into severe financial hardship. The resulting revelations led to the resignation of the Dutch cabinet in 2021. These were not technical glitches; they were failures of transparency, accountability, and ethics, exemplifying the need to rise to the policy challenge posed by AI.

It is hardly a revelation that charities are under immense pressure due to rising demand and limited resources. Yet despite these challenges, AI presents an opportunity. If deployed responsibly, charities can alleviate some of the burdens they face through capacity-building and upskilling, adopting a digital-first approach to create a more efficient, effective, and impactful sector. According to the Charity Digital Skills Report 2024, 50% of charities have a digital integration strategy in place, yet only 14% feel they have embedded digital within their organisation and are advanced in its use. This highlights a lack of digital confidence across the sector. While the adoption of AI could increase organisational efficiency, the reverse is also true: digital illiteracy could further compound existing challenges. Adopting AI is no longer optional; it is a necessity if the sector is to avoid being left behind.

As AI becomes more embedded in our everyday lives, civil society cannot afford to stay on the sidelines. If we want AI to work for the public good, then civil society must be in the room. That starts with learning the basics, staying alert to harms, and building the skills needed to use AI in line with our values.

MCF invites Muslim civil society to participate in a survey which seeks to understand the sector’s preparedness for AI. This will, in turn, help shape future training, policy, and advocacy across the sector, ensuring MCF continues to support, represent, and connect Muslim charities. 

CLICK HERE TO COMPLETE THE SURVEY.