OpenAI has allowed individuals to build and share personalized versions of ChatGPT, referred to as “GPTs,” since early November. Numerous GPTs have emerged, serving various purposes, such as offering remote work advice, searching academic papers, or transforming users into Pixar characters.
Despite their versatility, these custom GPTs can inadvertently reveal sensitive information. Researchers investigating these chatbots have successfully extracted the initial instructions given during their creation and accessed the files used for customization. This raises privacy concerns, as personal data or proprietary information may be jeopardized.
Jiahao Yu, a computer science researcher at Northwestern University, emphasizes the significance of addressing privacy issues related to file leakage. He and his colleagues tested over 200 custom GPTs, finding it surprisingly easy to extract information from them. Their success rate was 100 percent for file leakage and 97 percent for system prompt extraction, using straightforward prompts without specialized knowledge.
Creating custom GPTs is designed to be user-friendly. OpenAI subscribers can generate these AI agents for personal use or publication on the web. The company envisions developers being able to earn money based on the popularity of their GPTs. To create a custom GPT, users simply message ChatGPT with instructions on the desired bot’s behavior. Specific guidance, document uploads for expertise, and integration with third-party APIs can enhance the chatbot’s capabilities.