Child Safety Policy

Last updated: 21 February 2026

At Shuunnya we place the safety, dignity, and well-being of all our users, especially children and young people, at the heart of everything we do. We are committed to creating a secure, respectful, and nurturing environment in which learning, growth, and community connection flourish, free from exploitation, abuse, or harm.

This page outlines our standards and practices in relation to child safety, including our zero-tolerance approach to child sexual abuse and exploitation (CSAE), our reporting mechanisms, moderation, and compliance with relevant laws.

1. Scope and Purpose

This policy applies to all users of the Shuunnya platform (the "Platform"), including the website, mobile app, community forums, private messaging, audio and video calls, member directory, AI assistant, and all other features and services.

The purpose of this policy is to ensure that every part of our service, including user-generated content, messaging, member interactions, profile discovery, media uploads, and automated tools, is governed by robust safeguards against CSAE, grooming, exploitation, and other unsafe behaviour.

2. Prohibition of Child Sexual Abuse and Exploitation (CSAE)

We have a strict zero-tolerance policy for any content or behaviour that involves or facilitates the sexual exploitation or endangerment of children.

This includes, but is not limited to:

  • Any content depicting or promoting minors in sexual acts or sexualised contexts (Child Sexual Abuse Material, CSAM), whether in public posts, private messages, media uploads, profile content, or any other form.
  • Any grooming, enticement, trafficking, or predatory behaviour targeting minors, including through private messaging, audio/video calls, or member discovery features.
  • Any creation, distribution, or facilitation of content that sexualises minors, solicitation of minors for sexual content, or exploitation of their images or data.
  • Any depiction of violence, neglect, or abuse of minors.
  • Using the AI assistant to generate content that sexualises, endangers, or exploits minors in any way.

Any violation of this standard will result in immediate removal of the offending content, suspension or termination of the user account involved, preservation of evidence, and prompt reporting to relevant law-enforcement and child-protection authorities.

3. Published Standards

We publish our child-safety standards on this page so that they are publicly available and visible to all users, developers, and regulators. This policy is referenced from our Terms of Service and Privacy Policy.

4. Safety Across Platform Features

Child safety safeguards apply across all areas of the Platform:

4.1 ShuunnyaEcho (Messenger) & Calls

  • ShuunnyaEcho messaging and audio/video call features are subject to the same safety standards as public content.
  • Users can block any member at any time, immediately preventing further contact through messages and calls.
  • Abusive behaviour in private communications can be reported and will be investigated with the same urgency as public violations.
  • We do not record the content of audio or video calls. However, call metadata (participants, timing) is logged and may be reviewed during investigations.

4.2 Member Directory & Community Features

  • The member directory and community matching features are designed for constructive connection. Any use of these features to identify, target, or contact minors for inappropriate purposes is strictly prohibited.
  • Users have full control over their profile visibility and can restrict who can view their profile (public, members only, or private).
  • Suspicious search patterns or interaction behaviours may be flagged for review.

4.3 Media & Image Uploads

  • All uploaded media, including profile photos, cover images, status media, and message attachments, is subject to moderation.
  • Any media containing CSAM or content that exploits minors will be immediately removed and reported to authorities.
  • Ephemeral content (such as 24-hour statuses) is subject to the same safety standards as permanent content. Auto-deletion does not exempt content from moderation or reporting.

4.4 AI Assistant

  • ShuunnyaAI includes safety guardrails designed to prevent the generation of harmful content involving minors.
  • Users must not attempt to use the AI to produce content that sexualises, exploits, or endangers children.
  • AI conversations are logged and may be reviewed if harmful use is suspected or reported.

4.5 Forums

  • Forum content is publicly visible to members and is actively moderated.
  • Posts, replies, and media that violate child safety standards are removed immediately.
  • Forum moderators have the authority to lock threads, remove content, and escalate concerns.

5. In-App Reporting Mechanism

We provide clear, accessible mechanisms for users to raise concerns, report unsafe content or behaviour, and give feedback.

  • The "Report" button is available on member profiles, statuses, comments, forum posts, and private messages.
  • The "Block" button is available on every member profile and within conversations, allowing users to immediately cut off contact with any member.
  • Users may also email our dedicated child-safety contact at: shubham@shuunnya.org.

Response commitments:

  • Reports involving suspected CSAE or CSAM are treated as the highest priority and reviewed within 24 hours.
  • During review, the reported content is preserved as evidence but may be hidden from public view.
  • The reported user's account may be temporarily suspended pending investigation.
  • All reports are logged, tracked, and maintained as part of our audit records.

6. Monitoring, Moderation & Escalation

  • We operate both automated and manual monitoring of user-generated content and interaction patterns to detect signs of grooming, exploitation, CSAM, or other unsafe conduct.
  • All user-initiated content (profiles, posts, messages, media) is subject to moderation policies, including removal of violating content and account suspension or termination in serious cases.
  • We maintain comprehensive audit trails of moderation decisions, administrative actions, and system operations to ensure accountability and prevent unauthorised access to user data.
  • For suspected CSAE or CSAM, we take immediate action: removal of content, disabling of the account, preservation of evidence, and referral to the appropriate authorities.

7. Reporting to Authorities

When we identify or receive a credible report of CSAE or CSAM, we will:

  • Immediately remove the offending content from the Platform and disable the involved account.
  • Preserve all relevant evidence (content, metadata, account information, interaction logs) for law enforcement.
  • Report the incident to the National Center for Missing & Exploited Children (NCMEC) CyberTipline where applicable, and to the relevant national or regional child-protection authority based on the jurisdiction of the user and the nature of the offence.
  • Cooperate fully with law enforcement investigations, providing requested information within the bounds of applicable law.
  • Maintain records of all reports made to authorities.

8. Compliance with Child Safety Laws and Regulations

We align our practices with relevant child-protection laws and regulations, including but not limited to:

  • COPPA (Children's Online Privacy Protection Act, United States): We do not knowingly collect personal information from children under 13 without verifiable parental consent.
  • GDPR Article 8 (European Union): We recognise the requirement for parental consent for processing personal data of children under 16 (or the applicable age in each EU member state).
  • POCSO Act (Protection of Children from Sexual Offences, India): We comply with Indian law regarding the protection of minors from sexual exploitation.
  • IT Act, 2000 (India): We comply with provisions relating to online content, data protection, and intermediary liability.

We recognise our responsibility under these laws to safeguard minors' data, restrict dangerous interactions, limit the collection of personal data from children without appropriate consent, and ensure age-gating where required.

9. Age-Gating and User Eligibility

  • Shuunnya is intended for users aged 13 years and above. Users are required to provide accurate age information at registration.
  • We reserve the right to review, limit, or remove accounts if credible reports or moderation flags indicate that a user may be below the permitted age.
  • If we determine that a user is under 13, their account will be suspended and their personal data will be deleted in accordance with applicable law.
  • Parents or guardians who believe their child has created an account without consent should contact us immediately for account removal.

10. Data Minimisation for Young Users

For users between the ages of 13 and 18:

  • We encourage young users to use the strictest available privacy settings.
  • We collect only the information necessary to provide our services.
  • We do not use personal data of minors for targeted advertising or profiling. Shuunnya does not serve advertising of any kind.
  • Parents or guardians may contact us to request access to, correction of, or deletion of their child's personal data.

11. User Safety Tools

We provide the following tools to help all users, especially younger users, stay safe:

  • Block: Immediately prevent any member from contacting you or viewing your profile. Blocking is bidirectional and takes effect instantly.
  • Report: Flag content or behaviour for review by our moderation team. Reports can be made on profiles, posts, comments, messages, and forum content.
  • Privacy Controls: Set your profile visibility to restrict who can view your information. Control visibility of specific details like email, phone number, birthday, and online status.
  • Session Management: View all devices where your account is active and log out remotely from any or all sessions.

12. Education & Awareness

We believe that safety is a shared responsibility. We are committed to:

  • Providing clear guidance within the Platform on safe online behaviour and the importance of protecting personal information.
  • Encouraging users to report any suspicious or uncomfortable interactions.
  • Making our safety tools (Block, Report, privacy settings) easy to find and use.
  • Maintaining this publicly accessible policy so that users, parents, guardians, and educators understand our commitments.

13. Contact for Child Safety Matters

Child Safety Contact: shubham@shuunnya.org

We will respond to child-safety concerns with the highest priority and maintain records of all such communications. If you are a child or young person who feels unsafe, or a parent or guardian with concerns, please do not hesitate to contact us.

14. Review and Updates

We review this policy at least annually and whenever there are meaningful changes in our services, technologies, or applicable laws. The "Last updated" date at the top of this page reflects the most recent revision. Users will always see the current version via this public page.

Thank you for helping us maintain a safe and thriving community on Shuunnya.

Together, we make it better.