Overview
The pursuit of artificial general intelligence has sparked fears of unaligned superintelligences turning on their creators. But the mechanisms by which we might arrive at that nightmarish future—or not—remain under-interrogated. The debates over x-risk and AI safety ought to be grounded in clear logic, realistic estimates, and an awareness of the fundamental limitations of making predictions about an uncertain future.
IHS invites academic scholars whose research pertains to AI to apply for funding, particularly those assessing the plausibility of the social, political, or economic mechanisms behind AI risk. We are especially interested in proposals from scholars in the social sciences and humanities who can bring insights from disciplines that study human behavior and the adoption and implementation of new technologies.
Successful proposals will include clear deliverables, such as academic publications or public-facing experiments (e.g., newsletters). Most grants will not exceed $5,000, though we may consider larger awards for exceptional projects.
Application
To respond to this request for proposals, click the “Apply Now” button below. You will be redirected to our Expense Support application, where you can submit your proposal.
In response to the question “How did you hear about this program?”, please select “Request for Proposals – AI.”
Timeline
Proposals will be reviewed on a rolling basis. Priority consideration will be given to those who apply by March 2, 2026.