According to a new policy document from Meta, the Frontier AI Framework, the company may decline to release AI systems it develops in-house in certain risky scenarios.
The document defines two risk classifications for AI systems: “high risk” and “critical risk.” In both cases, these are systems that could help carry out cyber, chemical, or biological attacks.
Systems classified as “high risk” might facilitate such an attack, though not to the same extent as a “critical risk” system, which could lead to catastrophic outcomes. Examples of such outcomes include taking over a corporate environment or deploying powerful biological weapons.
This story originally appeared on Computerworld