At Glimr, we take the safety of our young users seriously and enforce a zero-tolerance approach to child sexual exploitation and abuse (CSEA). We have developed comprehensive policies and guidelines to proactively identify and respond to such risks. Any behavior involving or potentially leading to the sexual exploitation or abuse of minors, whether online or offline, is strictly prohibited. This includes child sexual abuse, exploitation and trafficking, grooming, sharing unsolicited obscene material with minors, and child sexual abuse material (CSAM). Users engaging in such activities will be permanently banned from Glimr, and their actions may be reported to the appropriate authorities.
To ensure a positive experience for everyone, we have built extensive safety tools that keep our community secure from the moment users sign up. Our safety framework has been recognized by industry experts as a leader in safety innovation.
Our safety tools and measures include advanced AI algorithms, comprehensive age verification (100% of users are verified), pioneering video and audio moderation, a team of Safety Specialists who proactively monitor the platform 24/7, customizable tools for user protection, reporting features, and educational content.
To learn more about how we monitor and manage Glimr to prevent grooming and child sexual exploitation, please continue reading below.
1. Sign-Up Process
When users download and sign up for Glimr, they provide the following information:
Mobile Number: Users verify their phone numbers through a confirmation code.
Date of Birth: Glimr requires users to be at least 18 years old and employs age gates to separate users into appropriate communities (see the age-gate sketch at the end of this section).
Name and Gender: Users provide their real name and gender during sign-up but can use a different username on their profile.
Profile Photo: Users upload a clear photo showing their face. Photos of other people, cartoons, or pets are not allowed as main profile pictures.
Glimr’s Community Guidelines are sent to all new users, who must read and accept them. Users can then add a bio, emojis, additional photos, and tags (e.g., favorite TV shows, games, music, or brands) to their profile.
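To make the sign-up age gate concrete, here is a minimal sketch in Python of the kind of date-of-birth check described above. It is illustrative only: the MINIMUM_AGE constant and the function names are our own assumptions, not Glimr's actual implementation.

```python
from datetime import date

MINIMUM_AGE = 18  # Glimr's stated minimum age

def years_between(born: date, today: date) -> int:
    """Whole years elapsed, accounting for whether the birthday has passed."""
    return today.year - born.year - ((today.month, today.day) < (born.month, born.day))

def passes_age_gate(date_of_birth: date, today: date | None = None) -> bool:
    """Reject sign-ups from anyone under the minimum age."""
    today = today or date.today()
    return years_between(date_of_birth, today) >= MINIMUM_AGE

# A 16-year-old is rejected; someone turning 18 today is admitted.
assert not passes_age_gate(date(2008, 6, 1), today=date(2025, 5, 31))
assert passes_age_gate(date(2007, 5, 31), today=date(2025, 5, 31))
```

A real age gate would of course pair this check with the verification signals described in the next section, since a date of birth alone is easy to misstate.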
2. Age and Identity Verification
Glimr verifies user identity and age through:
Discrepancy Detection: Identifying mismatches between the age given at sign-up and age information on the profile or in user interactions; a mismatch triggers additional ID verification (illustrated in the sketch after this list).
Device and Account Detection: Monitoring for multiple accounts created using the same device or phone number and prompting ID verification as necessary.
Google Image Search: Checking profile photos against online sources to detect fake or stolen images.
Phone Number Verification: Identifying fake numbers used during registration.
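As one way to picture discrepancy detection, the sketch below flags an account for ID verification whenever an age observed on the platform disagrees with the age given at sign-up. The field names and the one-year tolerance are hypothetical assumptions, not Glimr's real logic.

```python
from dataclasses import dataclass

@dataclass
class AgeSignals:
    """Ages attached to one account from different sources (hypothetical fields)."""
    signup_age: int            # derived from the date of birth given at sign-up
    profile_age: int | None    # an age stated in the profile bio, if any
    mentioned_ages: list[int]  # ages the user states in conversations

def needs_id_verification(signals: AgeSignals, tolerance: int = 1) -> bool:
    """Flag the account when any observed age disagrees with the
    sign-up age by more than `tolerance` years."""
    observed = list(signals.mentioned_ages)
    if signals.profile_age is not None:
        observed.append(signals.profile_age)
    return any(abs(age - signals.signup_age) > tolerance for age in observed)

# A user who signed up as 21 but says they are 15 in chat gets flagged.
assert needs_id_verification(AgeSignals(signup_age=21, profile_age=None, mentioned_ages=[15]))
```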
3. Community Guidelines
Our Community Guidelines ensure respectful interactions on Glimr. Among other topics, they prohibit sexual behavior, harassment, nudity, violence, drugs, hate speech, and animal abuse.
We use AI and human moderation to monitor activity and enforce guidelines. For minor violations, we aim to educate users about the issue before taking action, encouraging positive behavior changes in real time.
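The graduated enforcement described above could be modeled as a simple escalation ladder; the severity tiers and action strings below are illustrative assumptions rather than Glimr's actual enforcement rules.

```python
from enum import Enum

class Severity(Enum):
    MINOR = 1            # e.g. a borderline bio
    SERIOUS = 2          # e.g. harassment or nudity
    ZERO_TOLERANCE = 3   # any CSEA-related behavior

def choose_action(severity: Severity, prior_warnings: int) -> str:
    """Educate first for minor issues; ban immediately for the worst."""
    if severity is Severity.ZERO_TOLERANCE:
        return "permanent ban and report to authorities"
    if severity is Severity.SERIOUS:
        return "temporary or permanent ban"
    # Minor violations: explain the rule first, escalate on repetition.
    return "educational warning" if prior_warnings == 0 else "temporary ban"
```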
4. Content Monitoring and Moderation
Glimr employs innovative tools combining AI and human moderation to detect and address inappropriate content:
Automated alerts for risky or guideline-violating content, such as flagged words, photos, or videos, are sent to our Safety Specialists (see the sketch after this list).
Profile content reviews ensure compliance (e.g., clear profile photos, non-violent bios).
Monitoring private messages to detect risky behavior or potential violations.
For violations, we take appropriate action, including:
Temporary or permanent bans based on severity.
Intervention in cases of self-harm or suicide discussions, offering support and notifying authorities for safety checks.
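One way to picture how flagged content reaches Safety Specialists is a priority queue fed by an automated classifier, as in the sketch below. The stand-in classifier, the priority values, and all names are our assumptions; a production pipeline would use trained text, image, and video models and persistent infrastructure.

```python
from dataclasses import dataclass, field
from queue import PriorityQueue

@dataclass(order=True)
class Alert:
    """A moderation alert; lower `priority` values are reviewed first."""
    priority: int
    content_id: str = field(compare=False)
    reason: str = field(compare=False)  # e.g. "flagged word", "flagged image"

review_queue: PriorityQueue[Alert] = PriorityQueue()

def classify(content: str) -> tuple[bool, int, str]:
    """Stand-in for the AI classifier: returns (flagged?, priority, reason)."""
    banned = {"example-banned-term"}  # placeholder for real model output
    if any(term in content.lower() for term in banned):
        return True, 0, "flagged word"
    return False, 99, ""

def ingest(content_id: str, content: str) -> None:
    """Route AI-flagged content into the Safety Specialists' review queue."""
    flagged, priority, reason = classify(content)
    if flagged:
        review_queue.put(Alert(priority, content_id, reason))

ingest("msg-123", "this message contains an example-banned-term")
assert review_queue.qsize() == 1
```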
5. Reporting Child Sexual Exploitation and Abuse
How to Report:
We encourage all users to report inappropriate content or behavior directly. To do so, visit the user’s profile and click on the three dots in the upper right corner. From there, you can choose to submit a report.
Provide Context:
When submitting a report, users are encouraged to attach screenshots or media files to provide additional context. There is no limit to the number of attachments you can include (a possible report shape is sketched at the end of this section).
Review and Action:
Our team of Safety Specialists reviews each report thoroughly and takes appropriate action to address validated violations.
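For illustration, a submitted report might carry a shape like the one below. Every field name here is a hypothetical assumption, chosen only to mirror the flow described above: free-form context, an open-ended list of attachments, and enough identifiers for Safety Specialists to act on.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AbuseReport:
    """Possible shape of a user report as it reaches the Safety Specialists."""
    reporter_id: str
    reported_user_id: str
    category: str                    # e.g. "child safety"
    description: str
    attachments: list[str] = field(default_factory=list)  # screenshots/media; no fixed limit
    submitted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

report = AbuseReport(
    reporter_id="user-42",
    reported_user_id="user-99",
    category="child safety",
    description="Adult account requesting photos from a user who says they are a minor.",
    attachments=["screenshot-1.png", "screenshot-2.png"],
)
```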