The rise in cyber attack incidents has become a threat to the national security of most crypto-friendly states. The damage caused by these attacks has pushed major tech firms to collaborate in addressing the issue.
In a recent announcement, OpenAI, the team behind ChatGPT, revealed a partnership with Microsoft to take down bad actors targeting the financial and crypto industries. The two companies confirmed that the partnership aims to combat state-affiliated cybercrime.
Rise of Crypto Scams
After a thorough investigation, the OpenAI and Microsoft teams noted that some of the largest cyber attacks were conducted by China-based illicit groups, including Charcoal Typhoon and Salmon Typhoon, and by the Russia-affiliated hacking group Forest Blizzard.
The two partners also observed that some of the cyber attacks were launched by the Iran-affiliated group Crimson Sandstorm and the North Korea-affiliated group Emerald Sleet. In their findings, the OpenAI and Microsoft teams noted that the criminals have advanced their techniques for conducting cyber-related attacks.
Occasionally, these notorious criminals use generative AI tools such as GPT-4 to execute their unlawful activities. Beyond this, the cybercriminals have sharpened their skills in code debugging, phishing campaigns, and malware-related research to gain unauthorized access to organizations' systems and customer data.
In the report, the OpenAI and Microsoft teams observed that the criminals commonly researched cybersecurity tools and radar technology. With the advancement in tech, it was evident that the hackers were also proficient in researching satellite communication and translating technical papers.
OpenAI Teams Up with Microsoft to Address Cyber Attacks
The OpenAI team stated that despite the security measures implemented, hackers still find ways to compromise a system. In light of the damage caused by cybercrime, the accounts under investigation were suspended.
In the blog post, the OpenAI team noted that it had disrupted the operations of five state-affiliated illicit groups that leveraged the power of AI to conduct malicious attacks.
The ongoing legal action against the suspected groups has pushed the OpenAI team to devise a new method for detecting unlawful activities. This approach encourages knowledge sharing between law enforcers and companies to help apprehend the bad actors lurking in the crypto industry.
A statement from an OpenAI representative emphasized that most generative AI tools are used to improve people's quality of life. The representative acknowledged that OpenAI has diversified its products to meet the needs of students, trainers, and persons with disabilities.
He noted that despite the benefits of AI products, bad actors are seeking to undermine the delivery of essential services.
Tech Firms Explore Strategies to Address Crypto Crime
In a separate report, a Microsoft spokesperson confirmed the prevalence of cyber attacks. The tech giant has invested in developing a security graph that helps its technical team identify and monitor unlawful activities.
The executive stated that the advanced security tools help the Microsoft team address security concerns before they materialize. Given Microsoft's broad market coverage, the company processes trillions of signals that flag imminent security threats.
He explained that these signals consist of vital information extracted from the security graph, which primarily gathers and processes data to alert the technical team to emerging security concerns.
The spokesperson confirmed that Microsoft's corrective measures aim to protect customers from external attacks and cyber-related crimes. He added that despite the security measures the firm has implemented, some cyber attacks could not be fully prevented.
Elsewhere, the OpenAI team vowed to continue innovating new ways to address cybercrime. The team plans to join forces with regulators and other tech firms to thoroughly investigate cyber-related crime.
The OpenAI team also plans to leverage its diverse expertise to develop new ways to limit the activities of bad actors. In particular, it aims to deter malicious actors by upgrading existing systems to identify any suspicious activity.