I’m not as anti-AI as a lot of people here, but trusting it with very important things is asking for trouble. It still randomly hallucinates and gives you bad info. Not as often as it used to, but still not good enough to trust with your child’s health.
ChatGPT has taken my bread to the next level and helped me diagnose electronics problems way faster than I could have on my own, which is awesome. But it has also given me a blueberry muffin recipe with no wet ingredients and calculated bread hydration 10% too low. I can easily imagine a scenario where some tired parent asks it for a Motrin dose for an infant, gets a wildly wrong answer, and injures their child.
Like many tools, this one is only as smart as its wielder. There’s still a ton of critical thinking that needs to happen even in something as simple as baking bread. Using an AI tool to suggest ingredients can be useful from a creative perspective, but the output should not be assumed accurate at face value. Raisins and dill? Maybe ¯\\\_(ツ)\_/¯, haven’t tried that one myself.
I like AI for adding detail to things or acting as a muse, but it cannot be trusted with anything important. This is why I’m ‘anti-AI’. Too many people (especially in leadership roles) see this tool as a way to replace expensive humans with something that ‘does the thinking’; but as we’ve seen elsewhere in this thread, AI CAN’T THINK. It only suggests whatever is statistically likely to come next based on its input.
In the Security Operations space, we have a phrase: “trust, but verify”. For anything AI, I’d use “doubt, then verify” instead. That all said, AI might very well point you to the right place to ask how much Motrin an infant should get. Hopefully, that’s your local pediatrician.