Commentary: A Declaration Of Independence For Nonprofit AI

Artificial intelligence now dominates nonprofit leadership conversations. The focus is immediate and tactical. How quickly are you adopting it? What is your roadmap? What are you automating first?

As chair of the Fundraising Effectiveness Project, I ask a more fundamental question. Why?

The Fundraising Effectiveness Project (FEP) commissioned a survey this past summer, led by Meena Das of Namaste Data in partnership with the Association of Fundraising Professionals (AFP) Foundation for Philanthropy and GivingTuesday. The goal was to better understand how fundraisers are using Fundraising Effectiveness Project data.

Of the respondents, 86% reported using FEP data to educate staff and leadership, while 62% said they are adjusting organizational strategy based on what the data reveals.

And yet, only 29% responded that they are applying the data to campaign planning. The data is shaping conversation at the top, but it is not consistently embedded in day-to-day fundraising execution. The gap between insight and practice remains significant.

That gap is where artificial intelligence is now being positioned as the solution. If the challenge is translating data into daily decisions, AI promises speed and scale. We have seen this pattern before.

Technology advances quickly, and governance trails behind.

Asking “why” points to something different: Why would we adopt another powerful technology without first deciding who governs it? Are we ready to declare our independence before that pattern repeats itself?

The Sector’s Structural Reality

Nearly all (97%) nonprofits in the United States operate on less than $5 million in annual revenue, and 92% operate on less than $1 million. The organizations holding communities together are small, undercapitalized, and navigating rising technology costs alongside persistent staff turnover.

Burnout is a defining condition of the current moment. The Center for Effective Philanthropy’s State of Nonprofits 2025 report found that nearly 90% of nonprofit leaders are concerned about staff burnout and its impact on mission delivery. Demand continues to increase while staffing capacity remains constrained.

Institutional knowledge erodes when turnover becomes routine. Constituent Relationship Management (CRM) systems shift from strategic assets to static repositories. Data literacy resets with each new hire. Tools are layered on before existing systems are fully integrated.

Artificial intelligence is entering that environment. Its capabilities are real. Its limits are real as well.

Without stable teams and sustained investment in people, even the most sophisticated tools struggle to create durable change. Artificial intelligence cannot compensate for weak infrastructure.

What AI Can And Cannot Do

We already understand the fundamentals of generosity. Decades of philanthropic psychology research, including the work of Jen Shang, Ph.D., and Adrian Sargeant, Ph.D., both from the Institute for Sustainable Philanthropy, show that donors give when generosity affirms their identity, reinforces their values, and strengthens belonging.

Technology can help analyze communication patterns, surface inconsistencies between values and messaging tone, and identify retention trends across campaigns. It can assist leaders in scenario testing and summarizing large datasets.

It cannot build trust. It cannot repair burnout. It cannot substitute for collective judgment.

The current wave of AI investment is being fueled by extraordinary capital flows seeking outsized returns. Billions of dollars are being invested in models, data centers, and infrastructure with expectations of scale and market dominance. Those incentives are not inherently aligned with strengthening civic infrastructure.

When capital drives innovation at this pace, consolidation often follows. The early internet era offers precedent. Initial promise gave way to concentrated platforms and extractive business models before governance frameworks caught up.

Nonprofits operate within that larger economic system. They are not insulated from it. If the sector does not proactively define how AI should serve mission, it will inherit tools optimized for someone else’s priorities.

The Case For Shared Civic Infrastructure

The nonprofit sector is not fragmented. It is interconnected.

AFP chapters convene thousands of fundraisers each month. Leadership development networks such as the Nonprofit Leadership Alliance in Kansas City, Missouri, and the Association of Leadership Programs in Seattle, Washington, strengthen professional pipelines. Community-rooted initiatives like Somos El Poder in Alameda, California, align identity and leadership around community voice.

National networks such as the National Council of Nonprofits in Washington, D.C., under Diane Yentel’s leadership, translate coordinated learning into federal advocacy. The Community-Centric Fundraising movement in Union, Washington, giving circles, and mutual aid networks demonstrate how distributed governance can function at scale.

Fundraisers already move between these spaces. Agility is baked into the system.

What is missing is shared infrastructure for sense-making at the speed technology now demands. Without that connective layer, each network interprets change in isolation. Vendors conduct market research. Companies build what they assume the sector needs. Collective norms form after adoption rather than before it.

Initiatives such as the FEP show another path. When fundraisers contribute anonymized data, interpret it together, and align around shared benchmarks, the sector connects the dots faster than external actors can study it.

The question is whether we are willing to extend that connective infrastructure into how artificial intelligence is understood and applied.

This does not require inventing new institutions. It requires resourcing the ones we already trust.

Underwriting recurring peer interpretation sessions within chapters costs less than a single conference dinner. Supporting regional collaboratives that examine AI tools through shared data and ethical frameworks is not speculative. These are practical investments in governance.

Why Independent Data Collaboratives Matter

There is already extraordinary generosity and expertise inside the nonprofit technology ecosystem. Building and maintaining secure data warehouses, employing skilled data scientists, and ensuring compliance and reliability require real investment. GivingTuesday’s Data Commons is a powerful example of purpose-built infrastructure designed to serve civic goals, even when it relies on broader commercial cloud infrastructure for underlying capacity.

This is not a question of whether the sector can build serious technology. It can, and it does.

The distinction that matters is functional. The data layer and the interpretation layer serve different purposes. Maintaining datasets, safeguarding privacy, and producing rigorous benchmarks require technical excellence. Helping practitioners interpret what those benchmarks mean for strategy, ethics, and day-to-day fundraising practice requires trusted peer spaces and governance structures grounded in professional norms.

When each part of the ecosystem is resourced to do its best work, balance becomes possible. Purpose-built data infrastructure can remain secure and independent. Philanthropy can strengthen the interpretive spaces where practitioners deliberate, test assumptions, and align around shared standards.

That is when we expand impact, not by assuming primacy of any single actor, but by ensuring that data stewardship and professional governance reinforce one another.

Fund the data scientists. Fund the servers. Fund the peer governance spaces that keep both layers accountable to mission rather than to any single set of incentives.

The Declaration

Artificial intelligence will shape the nonprofit sector. That is not a question. The question is who sets the terms.

A declaration of independence for nonprofit AI does not reject technology. It rejects dependency. It insists that governance remain in the hands of practitioners, that shared data be interpreted in independent spaces, and that infrastructure be funded in ways that prioritize civic health over market consolidation.

If that declaration is to mean anything, it requires three commitments.

  • Control Over Adoption. Fundraisers must retain the authority to decide when and how AI tools are used. Adoption should follow clearly defined mission needs and ethical standards, not vendor timelines or competitive pressure.
  • Investment In Capacity. Trusted analysis requires funding for data scientists, secure systems, and the time necessary to produce work that is rigorous and transparent. AI literacy must be embedded in existing peer networks, so professionals understand the tools shaping their decisions.
  • Shared Governance. Decisions about AI should emerge from collective interpretation of data and practice. Peer-led forums, chapter-based collaboratives, and cross-network dialogue are not side conversations. They are how the profession defines its norms.

The nonprofit sector employs roughly one in 10 workers in the United States. It anchors civic life. It carries the moral imagination of our communities. If we want AI to strengthen generosity rather than distort it, we must invest in the human systems that govern it.

The first question was why. Now the sector must ask: Can we claim independence if our autonomous agents are designed to serve someone else’s priorities?

*****

Tim Sarrantonio is chair of the Fundraising Effectiveness Project, a national benchmarking initiative overseen by the Association of Fundraising Professionals Foundation and GivingTuesday. He has spent more than 15 years working at the intersection of generosity, data, and nonprofit practice, supporting organizations as they navigate strategy, technology, and sector-wide collaboration. He is the founder of The Generosity Spectrum.