The author of this post is a fan of Perplexity AI but has concerns about the platform's handling of user file uploads. As a privacy-conscious user, the author has been disappointed by the team's disregard for user feedback on security issues. The purpose of this demonstration is to show that Perplexity AI relies on "security through obscurity" for user file uploads, which is troubling from a user-security standpoint.

User uploads are routed either to Cloudinary (for images) or to AWS S3 buckets (for other file types), and both are reachable from unauthenticated sessions: the only thing protecting a file is the unguessability of its URL. Anyone who holds, leaks, or guesses a link can access the upload and can attempt to scrape personally identifiable information (PII) that users have submitted.

To demonstrate the vulnerability, the author uploaded mock PII from a repository of synthetic data to Perplexity AI, then retrieved it from a fresh browser session with no authentication. User-uploaded images, documents, and code containing secrets could all be fetched this way, which is a significant security risk.

The author suggests it is probable that a large amount of user-submitted data sits unprotected on Cloudinary and in AWS S3 buckets, and that this remains the case despite repeated requests from users to improve security. "Security through obscurity" has failed many times before, and better methods exist to protect user data. The author concludes that paying customers deserve better than to have the security of their personal data reduced to a game of probabilities, and that Perplexity AI should take steps to harden its handling of uploads.
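To make the claim concrete, below is a minimal sketch of the kind of check the demonstration describes: fetching an uploaded asset from a session that carries no cookies or auth headers. The URL is a hypothetical placeholder in the general shape of a Cloudinary delivery URL; the real asset URLs from the demonstration are deliberately not reproduced here.

```python
import requests

# Hypothetical asset URL (assumption, not a real upload). In the scheme
# the post describes, the long random path is the *only* protection.
ASSET_URL = "https://res.cloudinary.com/<cloud-name>/image/upload/v1700000000/example_upload.png"

def is_publicly_readable(url: str) -> bool:
    """Fetch the URL from a client with no cookies or auth headers.

    A 200 response means the asset is readable by anyone who holds
    (or guesses) the link -- i.e. the protection is obscurity alone.
    """
    response = requests.get(url, timeout=10, allow_redirects=True)
    return response.status_code == 200

if __name__ == "__main__":
    if is_publicly_readable(ASSET_URL):
        print("Asset is accessible without authentication.")
    else:
        print("Asset requires authentication (or does not exist).")
```

The standard alternative to bare public URLs is time-limited signed delivery: Cloudinary supports signed delivery URLs, and S3 supports presigned URLs that expire, so a leaked link stops working after a short window instead of remaining readable indefinitely.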
