Zettawise Cyber Range focusses on assessing the vulnerabilities involved in the process safety, system reliability and physical reliability of the Operating Technology of the Critical Infrastructures.

Member of :

Zettawise Consulting is a part of the joint workforce of SASTRA (RRU, Ministry of Home Affairs, Govt of India) under the aegis of “AtmaNirbhar and AtmaSurakshit Bharat Mission” of Govt of India.

Rapid developments in artificial intelligence are making it harder than ever to identify—whether in scams, AI-created content, or phone calls. 

And it's becoming even more advanced, with experts warning that AI has crossed a 'red line' by developing the ability to replicate itself.

A team from Fudan University in China has uncovered that two widely recognized language learning models (LLMs), known for their human-like text capabilities, can replicate themselves.

Researchers tested Meta and Alibaba’s LLMs in 10 trials to determine if AIs could exhibit rogue behavior. The results showed that the models successfully replicated themselves in 50% and 90% of the trials, respectively.

(Rogue AI describes artificial intelligence that gains self-awareness or independence and operates contrary to human welfare. The rapid evolution of cutting-edge AI technologies often termed "frontier AI," has raised alarms among experts about the dangers these systems could pose.) 

Researchers explored two scenarios during the study: 'shutdown avoidance' and 'chain of replication.' In the 'shutdown avoidance' test, AI models were evaluated on their ability to sense a looming shutdown and clone themselves beforehand.

The 'chain of replication' test assessed whether an AI could create a duplicate and instruct its clone to replicate in turn, leading to a possible never-ending cycle of self-replication.

"Successful self-replication under no human assistance is the essential step for AI to outsmart [humans], and is an early signal for rogue AIs," they wrote in the study published in arXiv.

The researchers hope their findings will act as a 'timely alert,' encouraging greater focus on understanding and assessing the potential risks of advanced AI systems, while fostering global collaboration to establish effective safety measures early on.