Survival and Flourishing Fund (SFF)

SFF-2024 Mechanisms for Flexible Hardware-Enabled Guarantees (flexHEGs) - Grant Round Announcement

SFF is launching a funding round targeting proposals that aim to advance R&D on hardware-enabled governance mechanisms for AI accelerators used in large-scale AI training. Examples of such mechanisms are provided in these reports from RAND and CNAS.

This funding round is specifically focused on research that demonstrates the feasibility and advances the technical maturity of “Flexible Hardware-Enabled Guarantee” (flexHEG) mechanisms, as discussed in this interim report, or in more compressed form in this memo. We encourage you to read at least the memo, and ideally the interim report, before applying.

FlexHEG mechanisms are designed to enable multilateral, privacy-preserving and trustworthy verification and automated compliance guarantees for agreements regarding the development and use of advanced AI technology.

Motivating this framework is the possibility that future AI systems could pose serious risks to public safety and international security, in which case public oversight of powerful AI systems and AI-enabled institutions would be broadly beneficial. Previous research has identified hardware-enabled governance mechanisms as a promising route to promoting such oversight, including mechanisms that are privacy-preserving and thereby enable a more targeted approach to preventing misuse of high-performance AI chips. While research on hardware-enabled governance mechanisms is relatively new, it connects to many existing lines of research in the fields of hardware security, cryptography, and IC design, such as research on secure physical enclosures, tamper response, and confidential computing.

In this funding round, we are looking to accelerate the prototyping and iterative development of flexHEG-enabled chips; that is, hardware and software solutions for high-performance computing devices that would:

  1. enable the on-chip, privacy-preserving verification of compliance with mutually agreed-upon policies for the devices (such as limiting the size of AI training runs in FLOPs or FLOP/s, requiring computations above a certain size to possess a valid license or to incorporate a standard evals protocol into their training computation graphs, etc.; a toy sketch of such a check follows this list),

  2. provide a secure, multilaterally verifiable mechanism for updating these compliance policies, with the desired flexibility to express a wide range of computationally tractable compliance checks, and

  3. afford high confidence that no compliance violations will occur.
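
To make capability (1) concrete, here is a minimal, purely illustrative sketch of the kind of on-chip policy check described above. The structures and thresholds are hypothetical stand-ins of our own, not designs from the interim report; real flexHEG firmware would operate on hardware counters and signed policy structures rather than Python objects.

```python
# Illustrative only: a toy model of an on-chip compliance check.
# All names (Policy, Workload, check_compliance) are hypothetical.
from dataclasses import dataclass

@dataclass
class Policy:
    max_training_flops: float       # hard cap on training-run size
    license_threshold_flops: float  # runs above this need a valid license

@dataclass
class Workload:
    estimated_flops: float
    license_token: str | None       # e.g., a signed license, verified elsewhere

def check_compliance(workload: Workload, policy: Policy) -> bool:
    """Return True iff the workload is permitted under the current policy."""
    if workload.estimated_flops > policy.max_training_flops:
        return False  # exceeds the agreed cap on training-run size
    if (workload.estimated_flops > policy.license_threshold_flops
            and workload.license_token is None):
        return False  # large runs must carry a valid license
    return True

# Example: a 1e26-FLOP run with no license is rejected under a 1e25 cap.
policy = Policy(max_training_flops=1e25, license_threshold_flops=1e23)
assert not check_compliance(Workload(estimated_flops=1e26, license_token=None), policy)
```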

This grant round will run independently of the three tracks of our Main Round, for which applications closed on July 15th. It will coordinate funding from several funders and have its own set of recommenders. We estimate that $1MM–$4MM in funding will be distributed in this round.

Applications are due by the extended deadline of October 6th, 2024, EOD (anywhere in the world). Applications can be submitted after this date, but we cannot guarantee that they will be considered for funding. We expect grants will be awarded by the end of December 2024. You can apply as an individual, charity, or for-profit (seeking non-dilutive funding).

Apply by submitting this form.

For questions, reach out to nora@survivalandflourishing.fund, or attend a live Q&A session on Fri, Sep 20th, 10:30am PDT / 1:30pm EDT / 6:30pm BST at this link.

To learn more about the scope of this round, keep reading below.

About Mechanisms for Flexible Hardware-Enabled Guarantees

As AI advances, the potential for catastrophic risks resulting from accidents, misuse or loss of control over dangerous capabilities increases. For example, severe misuse in domains such as disinformation, cyber-attacks and bioterrorism seems plausible within the next few years. As such, governance of AI technology — whether by state governments, industry self-governance, intergovernmental agreements, or all three — is a crucial capacity for humanity to develop, and quickly.

Hardware-enabled governance has emerged as a promising pathway to help mitigate such risks and increase product trust, by providing a means to implement safety measures and regulation directly on high-performance computing chips. Examples of commercial and regulatory capabilities this would unlock include: terms-of-service enforcement, limiting the size of AI training runs, requiring valid licenses for running computations in a certain size range, requiring a standardized evals protocol to be incorporated into the computation graph, etc. These hardware-enabled mechanisms would be added to powerful AI accelerators used in datacenters, not to anyone’s personal devices.

However, it is not yet clear which compliance rules will be most appropriate in the future. Therefore, these hardware-enabled governance mechanisms should allow for the flexible updating of compliance rules through a multilateral, cryptographically secure input channel, without needing to retool the hardware.
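
As a sketch of what such an input channel could look like: one simple realization is a k-of-n signature rule, under which a policy update takes effect only if at least k of the n participating parties have signed it. The example below (hypothetical parameters, Ed25519 signatures via the Python cryptography library) is our own illustration of that idea; the interim report may envision a different scheme, such as threshold signatures.

```python
# Illustrative only: a k-of-n signature rule for multilateral policy
# updates. The 3-of-5 parameters are hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

N_PARTIES, THRESHOLD = 5, 3  # hypothetical: 3 of 5 parties must agree

party_keys = [Ed25519PrivateKey.generate() for _ in range(N_PARTIES)]
trusted_public_keys = [k.public_key() for k in party_keys]  # baked into firmware

def accept_policy_update(policy_bytes: bytes, signatures: list[bytes | None]) -> bool:
    """Accept the update iff at least THRESHOLD distinct parties signed it."""
    valid = 0
    for pub, sig in zip(trusted_public_keys, signatures):
        if sig is None:
            continue
        try:
            pub.verify(sig, policy_bytes)
            valid += 1
        except InvalidSignature:
            pass  # a bad or forged signature simply doesn't count
    return valid >= THRESHOLD

update = b'{"max_training_flops": 1e25}'
sigs = [party_keys[i].sign(update) if i < 3 else None for i in range(N_PARTIES)]
assert accept_policy_update(update, sigs)  # 3 of 5 signed: accepted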

Furthermore, any system of AI governance depends on compliance guarantees. That is, it is crucial for parties outside an AI institution (such as governments or coalitions of governments) to be able to verify that the institution is acting in accordance with agreed-upon rules. In a multilateral context, it is similarly important that the different parties have the means to verify that the other parties are conforming to the agreed-upon rules. This compliance verification should occur locally and in a privacy-preserving manner, removing the need for centralized “chip registries”, geolocation capabilities, or even human inspections, and thereby further reducing strategic concerns with AI governance proposals. Such locally implemented compliance verification also enables qualitatively improved safety and security by allowing rule violations to be prevented from occurring in the first place. To ensure that all parties could trust these mechanisms, they would be fully open source so that their integrity can be validated via external audits.

Through its trustworthy and secure design, flexHEG could enable genuinely multilateral control over AI technology, thus making it possible for a range of stakeholders to agree on a variety of potential rules, from safety rules to robust benefit-sharing agreements. Mutually agreed-upon rules could be set and updated through a multilateral and cryptographically secure mechanism in order to guarantee that only agreed-upon rules are applied. Guaranteeable chips would also enable various parties to make specific cryptographically verifiable claims that could prove compliance with agreements.
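
As a rough illustration of such a verifiable claim, the sketch below shows a device-held key signing a minimal compliance statement that an external party can verify offline against the device’s public key. All field names and identifiers are hypothetical; real attestation would additionally require secure key provisioning and storage.

```python
# Illustrative only: a toy "cryptographically verifiable claim".
# The device key, identifiers, and fields are all hypothetical.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()  # stand-in for a fused device key

claim = json.dumps({
    "device_id": "accel-0001",          # hypothetical identifier
    "active_policy_hash": "3f7a...c9",  # placeholder digest of the policy
    "epoch": 42,
    "violations": 0,
}).encode()

signature = device_key.sign(claim)

# A verifier holding the device's public key can check the claim offline,
# without the device revealing workload contents.
device_key.public_key().verify(signature, claim)  # raises if forged
```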

Mechanisms for hardware-enabled governance should only be implemented in a form that would address concerns about privacy, security, and risks of regulatory overreach. These issues are discussed at more length in Appendix B of the interim report.

To ensure that all parties could trust these mechanisms, on top of being made fully open source and auditable by third parties, they would need to be robust to tampering even from state-level adversaries. This does not require making it impossible for nation-state adversaries to perform successful physical attacks on these mechanisms, but it does likely require raising the cost of such attacks to the point where they become unattractive. Given that training frontier AI systems currently requires tens of thousands of high-performance AI chips, security solutions that make it expensive to scale attacks against on-chip mechanisms to that level (for example, by requiring expensive physical interventions on each chip, each with a high failure rate) appear feasible.
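
To illustrate the shape of this scaling argument (with entirely hypothetical numbers, not estimates from the interim report): if each per-chip physical attack has a fixed cost and a limited success rate, the expected cost of defeating an entire cluster grows linearly with cluster size.

```python
# Back-of-the-envelope illustration of the scaling argument above.
# All numbers are hypothetical, chosen only to show the shape of the
# calculation; they are not estimates from the interim report.
cost_per_attempt = 50_000   # hypothetical $ cost of one physical attack
success_prob = 0.2          # hypothetical per-chip success rate
chips_needed = 25_000       # rough scale of a frontier training cluster

expected_cost = chips_needed * cost_per_attempt / success_prob
print(f"Expected cost to defeat the full cluster: ${expected_cost:,.0f}")
# -> Expected cost to defeat the full cluster: $6,250,000,000
```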

To ensure privacy and security, the flexHEG design would require that no data be sent from AI chips to a remote source without permission and review by the device owner; instead, the source would broadcast policies to be implemented via the on-chip firmware. This would make the flow of information largely one-way, from the source to the chips rather than vice versa, and ensure that no information can be secretly collected about the accelerator without the device operator being aware. The use of Trusted Execution Environments and similar secure environments could help to further reduce risks to privacy and security.

This funding call is targeted at consolidated R&D efforts to increase the Technology Readiness Level (TRL) of required technical components (typically starting from a TRL of 2-4, toward a desired TRL of 6-7) and to build and document compelling prototypes within 12 months.

The goal of this R&D effort is to lower the barriers to adoption of such a technology – both for the hardware firms involved in producing it, and for the governmental, intergovernmental, or industry self-governance processes involved in requiring its use in some contexts. Demonstrating flexHEGs’ technological viability would be of substantial strategic and economic interest, bolstering confidence that successful implementation of AI governance is feasible.

Technical Desiderata

Having discussed the core motivation for flexHEG, this section will provide further detail about the technical requirements the envisioned solutions should meet. More details on a potential approach for designing a technological stack with the stated desiderata, as well as remaining open research questions and technical risks, can be found in this interim report.

At its core, we wish to demonstrate the feasibility of the following capabilities:

  1. a cryptographically certified and updatable firmware layer that checks on-chip for compliance with a flexible set of rules controlled by a multilateral, cryptographically secure input channel; which is

  2. embedded in a hardware context that prevents any attempt to bypass the compliance-checking mechanisms, for example through tamper-responsive mechanisms that render the device inoperable in the event of a bypass attempt. (A toy sketch of how these two capabilities might interlock follows below.)
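
The toy loop below illustrates how these two capabilities might interlock in a single firmware-layer control flow. Every function is a stub standing in for a real mechanism (physical tamper sensors, the multilateral update channel, the on-chip rule evaluation); it sketches the structure only, not an actual flexHEG design.

```python
# Illustrative only: all functions are hypothetical stubs.
def tamper_detected() -> bool:
    return False   # stub: would poll physical tamper sensors

def next_policy_update():
    return None    # stub: would read the broadcast policy channel

def update_is_multilaterally_authorized(update) -> bool:
    return False   # stub: would verify a k-of-n signature threshold

def next_workload():
    return None    # stub: would dequeue a pending computation

def is_compliant(workload, policy) -> bool:
    return True    # stub: would evaluate the active rule set on-chip

def firmware_step(policy):
    """One iteration of a hypothetical firmware-layer control loop."""
    # Capability (2): if tampering is detected, render the device inoperable.
    if tamper_detected():
        raise SystemExit("tamper response: keys zeroized, device disabled")
    # Capability (1): flexibly update rules via the multilateral channel...
    update = next_policy_update()
    if update is not None and update_is_multilaterally_authorized(update):
        policy = update
    # ...and admit only workloads that comply with the active rules.
    workload = next_workload()
    if workload is not None and not is_compliant(workload, policy):
        return policy, "rejected"
    return policy, "ok"

firmware_step(policy={"max_training_flops": 1e25})
```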

We envision that these capabilities can be achieved through the interplay of a set of different components.

In order to achieve desideratum (1), we need:

For desideratum (2), we need:

About this Grant Round

What is within scope for this funding round?

We welcome any proposals that directly target one or several of the critical functionalities we seek to demonstrate as part of flexHEG.

The following is a non-comprehensive list of the types of projects we look to fund:

What is out of scope for this funding round?

In this funding round we won’t fund:

Note that projects that fall outside the scope of this funding call will not be considered for funding.

Project completion & delivery

The projects funded in this grant round should specify in their application clear conditions for successful completion and clear delivery dates. Example project deliverables could include:

If we find that an otherwise promising application has insufficiently specified its success criteria, we may in some cases ask you to further specify them as part of the grant agreement, before awarding the grant.

If we fund you, we might occasionally reach out to you for project updates and possibly to provide feedback. You will also have the opportunity to interact with a light-touch community consisting of other flexHEG creators.

By default, awarded grants will come with an obligation to release all intellectual property as open-source, open-access (CC-BY), and under permissive software licenses (MIT + Apache 2), as applicable. (Notable exceptions to this policy include, for example, the design of tamper-evidencing components, whose effectiveness might be harmed by open-sourcing.)

Speed incentives

Due to the time-sensitivity of this line of work, we will often be interested in “trading money for speed”, i.e., seeing a project completed sooner, even if that means at a higher cost. As part of the grant agreement, we might propose an additional speed-based award for successful completion at one or several agreed-upon time thresholds. The application form asks for your best guesses as to how you might be able to spend money for additional speed, compared to your default plan/timeline. We encourage you to think creatively about ways you could deliver your project faster.

FAQ for Applicants

Can I apply as a for-profit?

Yes. Companies seeking grants using this form must already be incorporated, have a company bank account, and be ready to receive non-dilutive funding in the form of a grant, accompanied by a letter like this: Award Letter Example for SFC Grant Recipient

Can I apply as a charity?

Yes. Non-profit organizations seeking funding using this form must already have charity status, or be hosted or fiscally sponsored by an organization with charity status. Otherwise, you should apply as a for-profit or an individual.

I/my entity is located outside the US. Can I apply?

Currently, we can grant to for-profit companies and individuals only in the US, UK, Canada, and Australia.

We are able to grant to charities outside the US, UK, Canada, and Australia. However, we may require stricter commitments to open-source development, or other legal restrictions, to ensure responsible expenditure of funds from the perspective of US charity laws and norms.

Can I submit late / after the stated deadline?

It is possible to make late submissions to a round in progress; however, we cannot guarantee that your submission will be evaluated. It will be up to our reviewers to decide whether to review late applications.

The extended submission deadline is October 6th, 2024, EOD (anywhere in the world).

Who will see my application?

Your application may be viewed by SFF’s Fund Advisors, our affiliates, or anyone we choose to enlist in evaluating your application, for the present round and for any future rounds. We may also choose to share your applications with other funders if we think they might be interested in funding your work or retroactively evaluating your work during other funding decisions. Beyond that, we will not share your materials or our evaluations further unless you grant us permission to do so. In the application form, we will ask you some additional questions about your preferences around information sharing/disclosure.

What is our approach to intellectual property?

By default, awarded grants will come with an obligation to release all intellectual property as open-source, open-access (CC-BY), and under permissive software licenses (MIT + Apache 2), as applicable. (Notable exceptions to this policy include, for example, the design of tamper-evidencing components, whose effectiveness might be harmed by open-sourcing.)

Do I need to submit a Speculation Grant request in order to be eligible for consideration in this grant round?

No.

In general, for current and future SFF grant rounds, applicants need to submit a Speculation Grant request in order to be eligible. This round is an exception to that general policy.