ChatGPT vulnerability may have exposed users' personal information, foreign media report

According to foreign media reports, on March 20, 2023, OpenAI's ChatGPT experienced a global outage, which raised concerns among users. After discovering a serious vulnerability in the service, OpenAI proactively disclosed the details of the issue.

According to the details shared, OpenAI took ChatGPT offline after noticing a vulnerability that could violate user privacy. Specifically, the vulnerability was in the open-source Redis client library (redis-py), and it allowed some users' chat titles and messages to be exposed to other users. ChatGPT uses this library to cache user information, recycle connections between requests, maintain a shared connection pool, and distribute the load among multiple Redis instances.
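For context, the caching pattern described above might look roughly like the following sketch. This is not OpenAI's actual code: the connection-pool settings, key format, and helper functions (`cache_user_profile`, `get_user_profile`) are hypothetical, and only standard redis-py asyncio calls are used.

```python
# A minimal sketch (not OpenAI's code) of caching per-user data with a shared
# redis-py asyncio connection pool that is reused across requests.
import asyncio
import json
from typing import Optional

import redis.asyncio as redis  # redis-py >= 4.2 ships an asyncio client

# One shared pool for the whole process; connections are recycled between requests.
pool = redis.ConnectionPool.from_url("redis://localhost:6379/0", max_connections=20)
cache = redis.Redis(connection_pool=pool)


async def cache_user_profile(user_id: str, profile: dict) -> None:
    # Cache the profile for 5 minutes so the database is not hit on every request.
    await cache.set(f"user:{user_id}", json.dumps(profile), ex=300)


async def get_user_profile(user_id: str) -> Optional[dict]:
    raw = await cache.get(f"user:{user_id}")
    return json.loads(raw) if raw is not None else None


async def main() -> None:
    await cache_user_profile("42", {"name": "Alice", "plan": "plus"})
    print(await get_user_profile("42"))
    await cache.close()


if __name__ == "__main__":
    asyncio.run(main())
```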

The vulnerability is triggered when a request is canceled after it has been pushed onto the incoming queue but before its response is popped from the outgoing queue. The connection then becomes corrupted, and the next response read from that connection can contain data left over from an unrelated request. In most cases this resulted in an unrecoverable server error and the user simply had to retry the request. In some cases, however, the corrupted data happened to match the data type the requester was expecting, so the value returned from the cache looked valid even though it belonged to another user.
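The mechanism can be illustrated with a toy, self-contained asyncio example that does not use Redis at all: a "connection" whose replies queue up in order, a request that is cancelled after sending its command but before reading its reply, and a later request that then reads the stale reply. All names in the snippet are invented for the demonstration.

```python
# Toy illustration of the failure mode: a reply left behind by a cancelled
# request is read by the next, unrelated request reusing the same connection.
import asyncio


class FakeConnection:
    """Simulates one pooled connection: commands go out, replies come back in order."""

    def __init__(self) -> None:
        self._replies: asyncio.Queue = asyncio.Queue()

    async def send_command(self, command: str) -> None:
        # The "server" answers every command; the reply sits on the connection
        # until someone reads it.
        await self._replies.put(f"reply to <{command}>")

    async def read_reply(self) -> str:
        return await self._replies.get()


async def do_request(conn: FakeConnection, command: str, delay: float) -> str:
    await conn.send_command(command)
    await asyncio.sleep(delay)  # window in which the request can be cancelled
    return await conn.read_reply()


async def main() -> None:
    conn = FakeConnection()  # one shared connection, as in a pool of size 1

    # Request A is cancelled after its command was sent but before its reply was read.
    task_a = asyncio.create_task(do_request(conn, "GET user:alice", delay=0.1))
    await asyncio.sleep(0.01)
    task_a.cancel()
    try:
        await task_a
    except asyncio.CancelledError:
        pass  # request A was abandoned; its reply is still on the connection

    # Request B reuses the same connection and receives A's stale reply.
    result_b = await do_request(conn, "GET user:bob", delay=0)
    print(result_b)  # prints: reply to <GET user:alice>


if __name__ == "__main__":
    asyncio.run(main())
```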

The exposure occurred during a nine-hour window, between 1:00 a.m. and 10:00 a.m. Pacific Time. In addition to exposing users' conversations, the bug also exposed some paying subscribers' payment details to other users. This is particularly sensitive because the leaked details included full names, email addresses, billing addresses, the last four digits of credit card numbers, and card expiration dates.

After discovering the vulnerability, OpenAI took ChatGPT offline and began fixing it. The company patched the bug and deployed additional checks to ensure that users receive only the responses that belong to them. It has also identified the users affected by the bug in order to notify them about the issue, and it credited the Redis maintainers for quickly fixing the vulnerability for ChatGPT users. Although the vulnerability has been patched, affected users (mainly paying subscribers) may want to contact their bank and monitor their accounts for possible fraudulent transactions.
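A minimal sketch of the kind of redundant check mentioned above might look like the following; the entry layout and function name are assumptions made for illustration, not OpenAI's actual implementation. The idea is simply that a cached value records which user it belongs to and is discarded if that owner does not match the requester.

```python
# Sketch of a redundant ownership check on cached data: a value whose recorded
# owner does not match the requesting user is treated as a cache miss.
import json
from typing import Optional


def load_from_cache(raw: Optional[bytes], requesting_user_id: str) -> Optional[dict]:
    if raw is None:
        return None  # cache miss
    entry = json.loads(raw)
    # If the cached payload belongs to someone else (e.g. because of a corrupted
    # connection), refuse to return it.
    if entry.get("owner") != requesting_user_id:
        return None
    return entry["payload"]


# Example: a stale entry owned by another user is rejected.
stale = json.dumps({"owner": "user-123", "payload": {"plan": "plus"}}).encode()
print(load_from_cache(stale, requesting_user_id="user-456"))  # None
print(load_from_cache(stale, requesting_user_id="user-123"))  # {'plan': 'plus'}
```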

The problems described above are a wake-up call for anyone using ChatGPT. To reduce the risk of privacy leakage, users should take the following precautions:

1. Limit the input of sensitive personal information

Avoid entering sensitive personal information, such as ID numbers and bank card numbers, in ChatGPT conversations, to avoid unnecessary risk (a simple redaction sketch follows this list).

2. Strengthen network security awareness

Users need to recognize the importance of network security: protect passwords carefully, change them regularly, never paste them directly into the chat box, and connect to ChatGPT only over a secure connection.

3. Encrypt transmissions with third-party tools

Users can also turn to third-party tools, such as encryption plug-ins or VPNs, to secure their communications and avoid privacy leaks. When sending sensitive information, it is advisable to use end-to-end encrypted applications such as Signal (a small encryption sketch also follows this list), and users can enable their browser's private mode to limit the data stored locally.
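As a rough illustration of precaution 1, a prompt can be scrubbed of obviously sensitive patterns before it is ever sent. The regular expressions below are illustrative only and by no means exhaustive.

```python
# Redact long digit runs (card/ID-like numbers) and email addresses from a prompt
# before sending it to a chat service.
import re

PATTERNS = [
    (re.compile(r"\b\d{13,19}\b"), "[CARD_NUMBER]"),           # bank card numbers
    (re.compile(r"\b\d{15}(\d{2}[\dXx])?\b"), "[ID_NUMBER]"),   # 15/18-digit ID numbers
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),        # email addresses
]


def redact(prompt: str) -> str:
    for pattern, placeholder in PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt


print(redact("My card 6222020200112233445 and email alice@example.com, please help."))
# -> "My card [CARD_NUMBER] and email [EMAIL], please help."
```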
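And as a rough illustration of precaution 3, sensitive details can be encrypted locally, for example with the `cryptography` package's Fernet recipe, so that only ciphertext ever leaves the machine. The snippet below is a minimal sketch, not a complete key-management solution.

```python
# Encrypt a note containing sensitive details with Fernet symmetric encryption.
from cryptography.fernet import Fernet

key = Fernet.generate_key()         # keep this key secret, e.g. in a password manager
fernet = Fernet(key)

token = fernet.encrypt(b"billing address: 123 Example Road")
print(token)                        # opaque ciphertext, safe to store or transmit

plaintext = fernet.decrypt(token)   # only someone holding the key can recover this
print(plaintext.decode())
```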

 
