Safety & privacy by design

The project follows the principles and framework of Trustworthy AI, with privacy by design: data privacy is prioritized from the very start rather than added later.

Content moderation guidelines

Large Language Models (LLMs), which power AI chatbots such as ours, are trained on data from the open internet and use it to generate answers to questions. We follow strict content moderation guidelines and engineer prompts that restrict unsafe behavior. We explicitly disallow hate speech, violence, sexual language, and any other type of speech that would be considered unsafe for children. However, please keep in mind that this technology is still in its infancy, and we recommend that an adult be present at all times.
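To illustrate the idea (this is a minimal sketch, not Mini Studio's actual moderation pipeline), a pre-response gate can combine a restrictive system prompt with a screen over the model's output. The category names, word lists, and fallback message below are all hypothetical:

```python
# Illustrative moderation gate: a child-safe system prompt plus a
# keyword screen on model output. All terms here are placeholders,
# not the real moderation rules.

SYSTEM_PROMPT = (
    "You are a friendly assistant for children. Refuse any request "
    "involving violence, hate, or sexual content, and answer in "
    "simple, age-appropriate language."
)

# Hypothetical blocklist, one entry per disallowed category.
BLOCKED_TERMS = {
    "violence": ["kill", "weapon"],
    "hate": ["slur_example"],
}

def is_safe(text: str) -> bool:
    """Return False if the text contains any blocked term."""
    lowered = text.lower()
    for terms in BLOCKED_TERMS.values():
        if any(term in lowered for term in terms):
            return False
    return True

def moderate_reply(model_reply: str) -> str:
    """Replace unsafe model output with a safe fallback message."""
    if is_safe(model_reply):
        return model_reply
    return "Let's talk about something else! What's your favorite animal?"
```

In practice this keyword screen would be one layer among several (prompt restrictions, a trained safety classifier, and human review), since word lists alone are easy to evade.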

Minimal data collection

Personal data, especially concerning children, is strictly limited. During sign-up, only an email address is required, and users are encouraged to create fictional usernames instead of sharing real identities.

Secure data handling

Robust security measures, including internet gateway protection, restricted security groups, and multi-layered database security, are implemented to safeguard user data. Continuous monitoring, such as with Amazon GuardDuty, detects and responds to malicious activity, and the project actively reviews and strengthens these measures. Mini Studio will never share user data with any third party. Currently, no chat data is stored, monitored, or saved.

Informed user consent

Users explicitly give consent before their creations are shared on social media or community pages, ensuring they have control over their data and its usage.

Additional measures

- Programmable guardrails for conversational systems.
- Sentiment analysis to guide conversations toward a positive, friendly tone.
- Training our own models on children's books.
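The sentiment-based guardrail above can be sketched as follows. This is an illustrative toy, not the production model: the word lists, threshold, and redirect message are invented for the example, and a real system would use a trained sentiment classifier instead of word counts.

```python
# Toy sentiment guardrail: score a candidate reply with a tiny
# word-list lexicon and redirect the conversation when the tone
# turns negative. Lexicon and threshold are hypothetical.

POSITIVE = {"fun", "great", "friend", "happy", "wonderful"}
NEGATIVE = {"scary", "hate", "awful", "hurt", "angry"}

def sentiment_score(text: str) -> int:
    """Positive-minus-negative word count; >= 0 counts as friendly."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def guard_reply(reply: str) -> str:
    """Pass friendly replies through; redirect negative ones."""
    if sentiment_score(reply) >= 0:
        return reply
    return "That sounds a bit gloomy. How about a fun story instead?"
```

A guardrail layer like this runs between the model and the child, so the tone check applies regardless of what the underlying model generates.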