Kristen Mack, Vice President, Communications, Fellows, and Partnerships, and Eric Sears, Director of Technology in the Public Interest, share how MacArthur’s values-based approach to AI centers people, both in our internal use and our grantmaking.

 

At the MacArthur Foundation, we aim to live our values in all aspects of our work. That has been true as we have deepened our experimentation with and use of artificial intelligence (AI) tools. Since the release of ChatGPT, there have been equal parts excitement and concern about the benefits and risks of generative AI. At MacArthur, we believe both responses are warranted.

As AI becomes more integrated into our daily lives—shaping how we learn, work, create, and interact—we need to be intentional about how it affects our communities and advance AI in a way that centers the public interest. For the better part of a decade, through our Technology in the Public Interest program, MacArthur has invested in organizations and networks researching and developing policy on the social implications of AI. That work continues.

We recently launched a new Big Bet on AI Opportunity, with the belief that we can still shape how and where AI shows up in our lives by expanding who creates, uses, and benefits from it. We also know this work cannot be done alone. That is why MacArthur is part of Humanity AI, a broad coalition of ten of the nation’s most dynamic foundations seeking to shape a more human(e) future with AI through a $500 million, five-year initiative.

Internally at MacArthur, we seek ways to use AI to make our work more creative, challenge our thinking, help us learn, and ultimately improve conditions for our grantee partners and our communities.

We are about to enter the next chapter of our work with AI, shifting from experimentation to an aligned strategy that centers the Foundation’s values.

How We Use AI

So how does MacArthur use AI?

We believe that policies are statements of values, and our vision for AI is rooted in ours. In 2023, we established our first policy for Use of Artificial Intelligence, which we share publicly on our website. We co-created this policy after research and conversations with internal departments, external peers, and experts, and we update it as we learn and identify new needs.

Within our policy’s guardrails for security and disclosure, Staff can choose how and when they engage with AI tools. We do not require people to use AI. We ask people to use it when it is meaningful to them. We have open conversations about how to use tools more effectively, what ethical questions arise, and how to approach difficult situations.

 

“We will never use AI to make a grant, hire an employee, write a strategy, or undertake other significant decisions that require human judgement.”

We have a careful vetting process for data security, among other considerations, as we pilot AI tools. We have laid out safeguards to protect our data and identified when to disclose that AI was used and when not to use AI at all. Our standards for disclosure apply equally to our Staff and to external vendors, grantees, and partners.

Most importantly, we will never use AI to make a grant, hire an employee, write a strategy, or undertake other significant decisions that require human judgement. One reason is simple: AI lacks integrity and can produce false and misleading information.

Where we use AI to improve our work is evolving as we learn. Staff have used it to boost productivity and surface new insights. Some promising use cases include using AI tools to:

Analyze our standard operating procedures to identify bottlenecks;
Reformat massive amounts of data to be more legible (including citations for fact checking);
Identify patterns in data;
Review archives to find lessons and stories we may have overlooked; and
Serve as one input into scenario planning and strategy development.

What Have We Learned?

As we have piloted AI tools, we have found curiosity and knowledge sharing to be our biggest assets.

We offered all Staff access to and training on AI tools and have established ongoing learning opportunities. A recent Staff survey found that most respondents have saved time using AI tools. Also notable, many users found that AI tools helped clarify their thinking and enhanced their creative capacity. Importantly, Staff also surfaced concerns about the environmental impacts of AI, as well as ongoing challenges with bias in AI systems.

 

“We have found curiosity and knowledge sharing to be our biggest assets.”

Our inclusive approach helps everyone take advantage of the benefits of new technology and ensures no one is left behind. Use cases are starting to emerge that promise to help our work at a strategic level. Staff have created chatbots to challenge ideas, improve research, pressure test arguments and strategy, and make ideas more compelling.

We have collaborated with peers through organizations like the Technology Association of Grantmakers and The Communications Network. Many of us are learning similar lessons about how to take advantage of this technology and about its limitations. For our storytelling and communications, for example, we found that the lessons from the Communications Network AI Summit aligned with our experience: AI is not a panacea; it can make our jobs easier; and we still need humans at every stage of the creative process.

Centering People

Our values and our mission demand that we seek to shape AI in a way that benefits people and the planet. We continue to ask questions and look for solutions to the downsides of AI, while also investing in opportunity.

We are exploring collaborative learning communities with our peers and our Humanity AI partners. Together we are supporting efforts in the arts, labor and work, democracy, education, and security, driving new investments toward a people-driven future where AI delivers for humanity, strengthens communities, and enhances human creativity. We recently announced $10 million in grants aligned with one or more of Humanity AI’s areas of focus.

 

“Our values and our mission demand that we seek to shape AI in a way that benefits people and the planet.”

We hope to further advance how the charitable foundation and nonprofit sector share knowledge, skills, and learning to use these digital technologies and tools well. Over the past two years, we have focused on guardrails, pilots, and exploration to build shared understanding. Over the next two or so years, we will connect governance and operations and continue to share our experiences within philanthropy, in hopes of meaningfully contributing to field-level transformation.