legislation in the works that would require companies spending more than $100 million to train a "frontier model" in AI (like the in-progress GPT-5) to do safety testing. Otherwise, they would be liable if their AI system leads to a "mass casualty event" or causes more than $500 million in damages in a single incident or set of closely linked incidents.
Are those models made by companies that would be affected based on the conditions above?
All models are very costly regardless of open source or closed source, but I'm not sure any current model reaches that high. The $100 million seems to apply only to the cost of compute, not to buying the actual cards (see the rough sketch below).
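As a minimal back-of-envelope sketch of why that threshold is hard to hit, here's a quick compute-rental cost estimate. Every number in it (total training FLOPs, per-GPU throughput, utilization, hourly price) is an illustrative assumption on my part, not a figure from the bill or from any vendor:

```python
# Back-of-envelope estimate of a training run's compute rental cost,
# to see how a model might approach the bill's $100M threshold.
# All numbers are illustrative assumptions, not official figures.

def training_cost_usd(total_flops, gpu_flops_per_s, utilization, usd_per_gpu_hour):
    """Cost of renting compute for one training run. Deliberately ignores
    hardware purchase, which (as noted above) the $100M threshold seems
    to exclude."""
    gpu_seconds = total_flops / (gpu_flops_per_s * utilization)
    gpu_hours = gpu_seconds / 3600
    return gpu_hours * usd_per_gpu_hour

# Assumed inputs: ~2e25 FLOPs for a frontier-scale run, ~1e15 FLOP/s per
# accelerator (dense BF16), 40% utilization, $2 per GPU-hour.
cost = training_cost_usd(
    total_flops=2e25,
    gpu_flops_per_s=1e15,
    utilization=0.40,
    usd_per_gpu_hour=2.0,
)
print(f"Estimated compute cost: ${cost / 1e6:.0f}M")  # ~$28M with these inputs
```

With those (generous) assumptions the run lands around $28M, well under the $100M line, which is consistent with the point that no current model clearly crosses it.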
The legislation is essentially asking that the model can't make nukes or carry out massive hacking attacks, and it's asking that only of companies that definitely have the money to make sure.
It's actually very level-headed compared to what most are pushing for. I can't even see it affecting current-gen AI, which is mostly harmless anyway.