Ten philanthropies will collectively contribute more than $200 million to a grantmaking initiative that will promote responsible deployment and use of artificial intelligence (AI). The effort, which is being touted by Vice President Kamala Harris, is part of a larger-scale focus by the Biden Administration on AI.
According to a statement from the Administration, “When developed, deployed, and used responsibly, AI technologies can help address pressing challenges in health, climate, education, and other issues. But AI systems are also creating significant and tangible harms – often with a disproportionate impact on marginalized communities – and pose serious threats to civil rights, human rights, worker rights, and national and international security. Moreover, while developments in AI have the potential to contribute to economic prosperity, sustained, broadly shared progress requires engaged action from communities, workers, government, and the public.”
Initial focuses of the funding will include:
1. Ensuring that AI protects democracy and the rights and freedoms of all people, including protecting U.S. democracy from efforts that use AI to undermine elections by combatting disinformation and the erosion of public trust. Funded projects will also develop inclusive, rights-respecting AI governance frameworks that safeguard historically marginalized communities.
2. Leveraging AI to innovate in the public interest and deliver breakthroughs that improve quality of life for people around the world. Funded projects would include outreach to policymakers regarding the nature, use, and technology of AI; redefining computer science education, research, and technology to incorporate community needs, problems, and aspirations into their development; and furthering other ethical and responsibility considerations.
3. Empowering workers to thrive amid AI-driven changes across sectors and industries. This includes investing in programs that empower workers to shape how AI affects their own work as well as current and emerging industries and global economies. Funds will be steered toward efforts that ensure AI systems respect worker rights and incorporate worker perspectives on AI use and its impact on working conditions and worker autonomy.
4. Improving transparency, interpretability, and accountability for AI models, companies, and deployers. Through this, philanthropies will support initiatives that hold AI companies accountable for racial, social, and economic bias within their offerings and operations. Separate endeavors will focus on AI-driven harms and will advance research that investigates power disparities and monopolies within the tech industry, among other concerns.
5. Supporting the development of responsible international AI rules and norms. Funded projects include development of policy frameworks, research that illuminates impacts of discrimination and bias, and advocacy efforts to ensure civil society has a seat at the table as international rules are developed.
Participating philanthropies include: The David and Lucile Packard Foundation, Democracy Fund, The Ford Foundation, Heising-Simons Foundation, The John D. and Catherine T. MacArthur Foundation, Kapor Foundation, Mozilla Foundation, Omidyar Network, Open Society Foundations, and the Wallace Global Fund.
Details regarding how the funds would be distributed were not available at deadline.
The philanthropies’ efforts are part of a wider-scale focus on AI by the Biden Administration. On October 30, President Joseph Biden signed an executive order that established security, privacy, protection, and innovation protocols for AI. The executive order required developers of the most powerful AI systems to share their safety test results and other information with the U.S. government; directed the development of means to ensure AI systems are safe, secure, and trustworthy; mandated protections against AI being used to develop dangerous biological materials; called for safeguards against AI-enabled fraud, including standards for detecting AI-generated content and authenticating official content; established cybersecurity programs to develop AI tools for detecting and fixing software vulnerabilities; and ordered the development of a security memorandum directing additional actions regarding AI and security.
Earlier this year, Vice President Harris led a group of CEOs from 15 AI companies in securing voluntary commitments on the safety, security, and trustworthiness of AI systems. She separately convened a discussion among civil rights, consumer protection, and labor representatives around risks related to AI and the idea that innovation can be advanced while simultaneously protecting consumers’ rights.