ReferIndia News ChatGPT And Gemini Can Give Harmful Answers If You Trick Them Via Poetry, Here Is How

ReferIndia News

ePrescribe

Clinic chalana ab hoga super easy—smart software ke saath!

Contact Now
News Image

ChatGPT And Gemini Can Give Harmful Answers If You Trick Them Via Poetry, Here Is How

Published on: Dec. 1, 2025, 10:38 a.m. | Source: Times Now

Recent research from Italy's Icaro Lab has revealed significant weaknesses in AI models like ChatGPT and Gemini, allowing attackers to bypass safety measures by framing harmful requests as poetry. The study tested 20 harmful prompts in poetic form, achieving a 62% success rate across various AI systems, including Moonshot AI and Mistral AI. , Technology & Science, Times Now

Checkout more news
Ad Banner

Looking for a side income?

Work on your own terms — become a freelancer with us! Choose projects you love, set your own schedule, and start earning today. No fixed hours, no limits — just flexibility and freedom.

Know more
ReferIndia News contact