By Scott Beale, CC
Artificial intelligence (AI) has gone from nowhere to everywhere in just about three years. If you are like most nonprofit leaders, you are already using AI, but you still have questions about the cybersecurity considerations and risks for your organization.
Today, 92% of nonprofits have adopted AI-enabled tools, as noted in “The 2026 Nonprofit AI Adoption Report” (https://bit.ly/4rf232J), yet nearly half of respondents (47%) in the same report indicate they have no AI governance policy.
AI adoption is a great way to engage younger staff and to bring in volunteers and board members with different perspectives. It is equally critical that nonprofit leaders get up to speed on both the benefits and the risks, and on how generative AI and AI agents are changing the game. Don’t make the mistake of missing out on the efficiencies these tools can deliver. And don’t make the mistake of ignoring the risks they introduce.
This is where leadership matters. AI strategy is not about technical skills or calculations. It is a leadership decision that shapes how your organization balances productivity, risk and mission impact. The question for nonprofit leaders is not simply whether to adopt AI, but how to do so responsibly while strengthening the people and communities your organization serves. Will you let fear, tight budgets or unfamiliarity paralyze you and your organization when critical AI planning and security decisions need to be made, or will you embrace AI, mitigate the risks and optimize it in a way that is consistent with your mission?
Many nonprofit leaders have wrestled with these same questions. Whether your organization is adopting generative AI, a traditional machine learning model, a chatbot or any other type of AI, five key considerations should guide your AI strategy and decision-making as a nonprofit leader.
1. Set Up Clear, Objective, Responsible Guardrails And Governance
It is tempting to rush full speed ahead into adopting AI tools for efficiency and productivity. But adopting AI tools should be an organizational strategy set forth by you and other executive leaders, as well as your board. These are common questions to ask before diving in. It’s perfectly acceptable as a leader to enlist outside advisors if these questions are new to you and your team:
- What specifically do you intend to accomplish by introducing AI into your operation? Do those objectives align with your mission, culture, stakeholders and more?
- Who is driving AI adoption? IT? Is it decentralized department-by-department? And are security and privacy considered from the start?
- What organizational and external donor, staff, member and/or stakeholder data would be involved?
- Can your data strategy assure that your organization can comply with data privacy regulations?
- How do you assess and categorize AI use cases based on risk? (A simple triage approach is sketched after this list.)
- What is the process for monitoring safe and secure deployment and use?
- How will you train employees on responsible and secure AI use?
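One way to make that risk-categorization question concrete is a small triage helper your governance team can agree on. The sketch below is illustrative only, written in Python for readability; the fields, tiers and rules are assumptions to adapt to your own policy, not a standard framework.

```python
# Illustrative AI use-case triage -- the fields, tiers and rules here are
# assumptions for discussion, not a standard risk framework.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    handles_sensitive_data: bool  # donor PII, health records, financials, etc.
    external_tool: bool           # data leaves your own environment
    human_review: bool            # a person checks outputs before they're used

def risk_tier(uc: AIUseCase) -> str:
    """Assign a coarse tier a board or IT lead can act on."""
    if uc.handles_sensitive_data and uc.external_tool:
        return "high"    # executive sign-off and vendor review required
    if uc.handles_sensitive_data or not uc.human_review:
        return "medium"  # documented guardrails required
    return "low"         # approved under standing policy

print(risk_tier(AIUseCase("Brainstorm engagement ideas", False, True, True)))  # low
print(risk_tier(AIUseCase("Donor giving predictions", True, True, False)))     # high
```

Even a rough cut like this gives every proposed use case an owner, a tier and a next step, which is most of what early-stage governance needs.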
As a nonprofit leader, you can strategically moderate AI adoption to answer these questions thoroughly. Again, this first step is a critical one for changing the perspective and mindset of your organization regarding the value of AI. It is worth the time and energy upfront to map out:
- AI governance: Roles, responsibilities and accountability owners, risk management processes, data security strategy, employee compliance and more.
- AI strategy: Your organization’s specific use cases (e.g., donation predictions, volunteer communications, market and trend analysis, data security), intended outcomes, deployment processes, timeline and measures of success.
- AI expertise: Paths, plans and opportunities for becoming well-versed in AI strategy, risk and security as executive leaders, and even members of the Board.
- AI adoption: The framing of adoption in terms of the mission and, in turn, enabling staff so they understand how AI makes them more efficient and empowers them to be better at the organization’s mission, rather than just using a task-oriented tool.
2. Start With Lower-Risk Use Cases That Don’t Involve Sensitive Data
Although it might be tempting to unleash AI tools across your organization, doing so only widens the potential attack surface. Great leadership is about looking at all facets of any strategy involving something new. It goes without saying that you don’t want to race out of the AI gate with fundraising optimization or donor prediction tools that rely on your protected data to fine-tune them.
Instead, focus on lower-risk use cases to test out your organization’s internal security guardrails, policies and governance. These applications might include writing marketing emails at scale, drafting basic operational processes, streamlining project management or brainstorming creative ways to boost member engagement.
There are security and data privacy protocols, of course, with any use case. Assume that any free AI tool is using your inputs to train its underlying model, which makes your data available outside your organization. That openness is simply too risky when it includes sensitive data. Therefore, it is worth budgeting for your organization’s own, locked-down generative AI tool to ensure your data stays within your organization’s secured perimeter.
To add another layer of data security, make sure every employee is continually well-trained on which data is okay to use with these tools and, by contrast, which information should never be used under any circumstances. Keep in mind that the basic tenets of your security practice do not change with AI: how you secure your data, who has access to it and what constitutes acceptable use all remain the same.
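To make that training tangible, some organizations pair it with a simple pre-submission screen that flags obviously sensitive strings before staff paste text into an external tool. The sketch below is a minimal, hypothetical illustration; the patterns are assumptions and nowhere near complete, so treat it as a teaching aid rather than a data-loss-prevention product.

```python
# Minimal, illustrative pre-submission screen. The patterns below are
# assumptions and intentionally incomplete -- a teaching aid, not DLP.
import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN-like number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_for_ai(text: str) -> list[str]:
    """Return the kinds of sensitive data spotted in the text, if any."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

draft = "Follow up with jane@example.org about her pledge."
findings = screen_for_ai(draft)
if findings:
    print("Hold off on external AI tools; found:", ", ".join(findings))
else:
    print("No obvious sensitive data detected (human judgment still applies).")
```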
3. Scrutinize The Data Supply Chain When Vetting Third-Party Software Vendors
Weighing cybersecurity and risk of AI within your organization is one thing; considering your third-party software suppliers is another. Does your donor management platform come with an embedded AI tool? What about your other donor engagement tools? If so, what are you doing to assess the security of your third-party software supply chain?
Thomas Lee, Ph.D., CEO of VivoSecurity, who specializes in third-party risk management, has noted (https://bit.ly/4bx2918) that data breaches constitute a “data management problem.” As a leader in the AI age, you must sharpen your organization’s data strategy. Do you know where your data is going and who has access to it when you use AI-enabled tools?
According to Dr. Lee’s research, third-party data breaches account for more than 50% of large data breaches. Third-party supply chain security and risk management are just as crucial as getting your in-house governance in order.
It’s no wonder, then, that 70% of respondents from organizations of all shapes, sizes and purposes said they are highly concerned about supply chain risk, according to the ISC2 Supply Chain Risk Survey (https://bit.ly/4sAO2Oa). Accordingly, vetting the security of your third-party software vendors is crucial. While the promise of boosting productivity and impact is appealing, mitigating third-party risk should drive your AI strategy. Whether you use an in-house security expert or a managed security service provider, make sure they are involved upfront in your AI strategy and the software review process.
In any case, challenge your vendors on how their AI works, and find out whether their embedded AI can be turned off. How transparent are they about how your data is used? Can you lock down your own instance/tenant of their solution?
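A lightweight way to operationalize those questions is a standing vendor questionnaire with a clear escalation rule, so every review produces the same artifact. The sketch below is hypothetical; the questions and pass/fail logic are assumptions to adapt to your own review process.

```python
# Hypothetical vendor AI questionnaire -- the questions and scoring are
# assumptions to adapt, not an audit standard.
VENDOR_AI_QUESTIONS = [
    "Can the embedded AI features be turned off entirely?",
    "Is customer data excluded from model training by default?",
    "Can our instance/tenant be isolated from other customers' data?",
    "Is there a documented data-retention and deletion policy?",
    "Will the vendor disclose its own AI and data subprocessors?",
]

def review_vendor(name: str, answers: dict[str, bool]) -> None:
    """Print a pass/escalate summary from recorded yes/no answers."""
    failures = [q for q in VENDOR_AI_QUESTIONS if not answers.get(q, False)]
    status = "OK to proceed" if not failures else "Escalate to security review"
    print(f"{name}: {status}")
    for q in failures:
        print(f"  - Unresolved: {q}")

review_vendor("Example Donor CRM", {q: True for q in VENDOR_AI_QUESTIONS})
```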
4. Understand The Latest AI-Focused Cyber Threats
AI is redefining cybercrime, enabling bad actors to scale the speed, impact and reach of cyberattacks. Both technical and human-focused AI attacks are taking center stage, so it’s imperative to understand how criminals are using AI. The ISC2 Cybersecurity Workforce Study (https://bit.ly/4mjNxpS), which surveyed cybersecurity professionals from a wide range of industries and organization sizes, including the nonprofit sector, found that 40% of respondents experienced AI-optimized social engineering attacks, 25% reported data leakage, 23% saw suspected AI-powered cyberattacks and another 23% experienced AI-related data breaches. These types of attacks are also happening to your donors. Make sure they know how you will reach out to them so they don’t fall victim to scams using your name and likeness.
5. Prioritize People Over Machines To Mitigate AI Risk
Finally, a well-oiled AI strategy should prioritize people over productivity, empowering them to stay curious, connected and impactful. Even if you’ve established the most secure and responsible AI strategy, the bottom line is this: You’re still dealing with a machine. So, an underlying consideration for securing AI in nonprofits, and mitigating risk, is what and who you are protecting.
To lead a purpose-driven organization, you must consider the technological and security impacts of AI on those you hope to inspire, including employees and the communities you serve. These are the intangible aspects of AI risk. Will AI enhance or degrade their efforts and experiences? Will AI take away from the authentic, human element that’s so ingrained in elevating the mission of nonprofits? Will your organization’s use of AI tools add to AI’s potential climate impacts? Will AI replace any employees? Will it erase the nuances that characterize socially driven organizations? And so on.
Purpose-Driven AI Adoption
As AI drives a fundamental shift in how every organization operates and scales in the years ahead, this moment represents far more than adopting a new technology. The path forward is about channeling your leadership influence to slow down before hurtling full speed ahead, and to scrutinize how to adopt AI responsibly, securely and with minimal risk.
The value and impact of AI in nonprofits must be shaped not by machines themselves but by the leaders who build an AI strategy with intention, purpose and positive impact for the people, both employees and stakeholders, who can carry forward your organization’s unique mission.
*****
Scott Beale, CC, is chief executive officer of cybersecurity nonprofit ISC2 (https://www.isc2.org).