OpenAI blog post: https://openai.com/research/building-an-early-warning-system-for-llm-aided-biological-threat-creation
Orange discuss: https://news.ycombinator.com/item?id=39207291
I don’t have any particular section to call out. May post thoughts tomorrow (today, actually, it’s after midnight, oh gosh), but wanted to post since I knew y’all would be interested in this.
Terrorists could use autocorrect according to OpenAI! Discuss!
right, sure. there are very few labs that require select agents, and running an anthrax vaccine program out of your backyard is a hobby i haven’t heard of yet
lab scale cws are just that, lab scale. multi-kg amounts are not lab scale, and unless you’re running around with a suspiciously modified umbrella, sub-gram amounts aren’t casualty-producing
you can’t just grow some random bacteria at high concentration; you’re looking at tens to thousands of liters of fermenter volume just to get anything useful (then you need to purify it, dispose of the now pretty hazardous waste, and lyophilize all the output, which gets expensive too)
for the reasons i’ve pointed out before, there’s little to no similarity
it’s nice that you mention it, because i’ve witnessed some “ai-driven” drug development firsthand during early covid. despite having access to xrd data from fragment screening and antiviral activity measurements, and despite a custom ai built just for this one protein, the actual lead that survived development to clinical stage was made completely and entirely by human medchemists, atom by atom, and didn’t even involve one pocket that was important in binding of that compound (though involving that pocket was a good idea in principle, because there are potent compounds that do that), and that despite ai-generated compounds making up something like 2/3 of everything tested for potency. but you won’t find any of that on that startup’s page anymore, oh no, that scares away vcs.
i’m equally sure that it’ll go poorly then too, because this is not a problem you can simulate your way out of; some real-world data would need to go in, and that data is restricted
yeah nah again. recently (june 2023) there was some fucker in norway who got caught making ricin (which i would argue is more of a chemical weapon), because he got poisoned in the process, with zero fatalities. [1] around the same time a single terrorist incident generated about the same number of casualties and far more fatalities than all of these “bw terrorism” incidents combined. [2] this doesn’t make me think that bw are a credible threat, at least compared to usual conventional weapons, outside of nation-state-level actors
at no point have you answered the problem of analysis. this is what generates most of the costs in a lab, and i see no way an llm can tell you how pure a compound is, what it is, or what kind of bacteria you’ve just grown and whether it’s lethal and how transmissible. if you have a known-lethal sample (a load-bearing assumption) you can just grow that, and at no point will gpt4 help you; if you don’t, you need to test it anyway, and good luck doing that covertly if you’re not a state-level actor. you run into the same problem with cws, but there at least you can compare some spectra with known literature ones. at no point have you shown how llms can expedite any of this
you don’t come here with convincing arguments, you don’t have any reasonable data backing them up, and i hope you’ll find something more productive to do over the rest of the weekend. i remain unconvinced that bw or even cw terrorism is anything other than a movie-plot idea, and its promise is massive bait for a particular sector of extremists