How Boba's AI Moderation Protects You Before You Even See a Message

Faith Ajan
February 4, 2026
6 min read

Romance scams cost Americans close to $700 million in 2024, according to FTC data. The median loss per victim? About $2,000. And those are just the ones that get reported.

International dating sites get hit the hardest. Scammers know the playbook: create a fake profile, build an emotional connection over weeks, then start asking for money. A sick family member. A plane ticket to come visit. An emergency that only you can help with. By the time most platforms catch on, the damage is done.

We built Boba's safety system to catch these people before they get to you. Here's how it works.

Two Stages, Not One

Most dating apps handle moderation the same way: someone reports a message, a human reviewer looks at it (eventually), and if it's bad enough, the account gets banned. That's reactive. The scammer already sent the message. You already read it. You might have already started believing them.

Boba runs two separate AI checks on every single message, image, and voice recording before it ever reaches your screen.

Stage 1 scans the content itself. Text gets analyzed for harmful content, financial requests, suspicious links, and explicit language. Images go through NSFW detection that scores them on a scale, not just a yes/no. Voice messages (Yaps) get transcribed and run through the same text analysis. All of this happens in real time.
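
For a concrete picture, here's a minimal sketch of what a per-message Stage 1 check could look like. Every name in it (scan_text, nsfw_score, transcribe, moderate_stage1) is a hypothetical stand-in, not Boba's actual code, and keyword checks and stubs take the place of real models:

from dataclasses import dataclass

FINANCIAL_TERMS = ("send money", "gift card", "wire transfer")
LINK_HINTS = ("http://", "https://", "wa.me", "t.me")

@dataclass
class ScanResult:
    financial_request: bool = False
    suspicious_link: bool = False
    nsfw_score: float = 0.0  # scored on a 0-1 scale, not a binary yes/no

def scan_text(text: str) -> ScanResult:
    # Stand-in for a real text classifier
    lowered = text.lower()
    return ScanResult(
        financial_request=any(t in lowered for t in FINANCIAL_TERMS),
        suspicious_link=any(h in lowered for h in LINK_HINTS),
    )

def nsfw_score(image: bytes) -> float:
    return 0.0  # stand-in for an image model's probability output

def transcribe(audio: bytes) -> str:
    return ""  # stand-in for speech-to-text

def moderate_stage1(kind: str, payload) -> ScanResult:
    if kind == "text":
        return scan_text(payload)
    if kind == "image":
        return ScanResult(nsfw_score=nsfw_score(payload))
    if kind == "yap":  # voice message: transcribe, then reuse the text path
        return scan_text(transcribe(payload))
    raise ValueError(f"unknown message kind: {kind}")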

Stage 2 is where it gets smarter. A second AI layer looks at the bigger picture. It reads the conversation history, not just the latest message. It's looking for patterns: Is this person love bombing? Are they slowly building toward asking for money? Have their stories been inconsistent? Are they trying to move you off the platform?

Each message on its own might look perfectly innocent. "My mother is in the hospital." That's a normal thing to say. But if the same person has been escalating emotional intensity for two weeks and this is the third personal crisis they've mentioned, the system connects the dots.
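
Here's a toy version of that cross-message reasoning. The keyword lists are invented for illustration; the real system would use a model reading the full conversation, but the point is the same: nothing fires on one message, only on the history.

CRISIS_HINTS = ("hospital", "accident", "emergency", "visa")
OFF_PLATFORM = ("whatsapp", "telegram")

def analyze_conversation(history: list[str]) -> list[str]:
    # Counters stand in for the model's contextual judgment: no single
    # message trips these thresholds, only the pattern across all of them.
    lowered = [m.lower() for m in history]
    flags = []
    crises = sum(any(h in m for h in CRISIS_HINTS) for m in lowered)
    if crises >= 3:
        flags.append("repeated personal crises")
    if any(h in m for m in lowered for h in OFF_PLATFORM):
        flags.append("attempt to move off-platform")
    return flags

history = [
    "good morning, thinking of you",
    "my mother is in the hospital",
    "now there has been an accident",
    "my visa emergency means I can't travel",
    "message me on whatsapp instead",
]
print(analyze_conversation(history))  # both patterns fire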

What Happens When Something Gets Flagged

Not all flags are treated the same. We built three different responses depending on what the AI finds.

Blocked content never reaches you. If someone sends something explicitly harmful or policy-violating, it's stopped. The sender gets notified. You don't see anything.

Warning messages are the most useful response. When the AI detects a manipulation pattern, you see an inline warning from Ollie (our safety mascot) explaining exactly what triggered it. Not a vague "this message may be suspicious." Specific explanations like "This person is asking you to move to an external messaging app" or "This conversation shows signs of financial request escalation."

There's a one-tap report button attached to every warning. If something feels off, you can flag it immediately.
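
As a sketch, the three-tier dispatch might look like the following. The names are invented, and the warning copy is taken from the examples above; none of this is Boba's actual code:

from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    BLOCK = auto()    # stopped; sender notified, recipient never sees it
    WARN = auto()     # delivered with an inline explanation from Ollie
    DELIVER = auto()  # clean; passes straight through

@dataclass
class Verdict:
    harmful: bool = False
    financial_request: bool = False
    off_platform_push: bool = False

def decide(v: Verdict) -> tuple[Action, str | None]:
    if v.harmful:
        return Action.BLOCK, None
    if v.off_platform_push:
        return Action.WARN, "This person is asking you to move to an external messaging app."
    if v.financial_request:
        return Action.WARN, "This conversation shows signs of financial request escalation."
    return Action.DELIVER, None

In the real product the warning rides inline with the message rather than replacing it, with the one-tap report button attached.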

Why This Works Better Than What's Out There

FilipinoCupid and Cherry Blossoms rely on basic moderation. Manual reports, keyword filters, maybe some automated flagging for the most obvious stuff. There's no conversation-level analysis. No AI reading context across multiple messages. No real-time warnings before you've already been exposed.

Tinder, Bumble, and Hinge are better resourced, but their moderation is mostly built for a different problem. They're catching explicit photos and hate speech. They're not built to detect the slow, methodical manipulation that romance scammers use in cross-cultural dating, where the playbook unfolds over weeks and involves emotional grooming, not just one bad message.

Boba's system was designed specifically for this. Cross-cultural dating is where scammers do their best work, and that's exactly where the AI is focused.

It Covers Everything, Not Just Text

Text messages get analyzed. That's table stakes. But scammers adapt. If text gets moderated, they move to images. If images get caught, they try voice messages. If the platform doesn't have voice messages, they push you to WhatsApp where nothing is monitored at all.

On Boba, every communication channel runs through the same moderation pipeline. Send a photo? It goes through NSFW scoring before delivery. Send a Yap? It gets transcribed, and that transcription gets analyzed. Try to share an external link or phone number? Flagged. Every door is covered.
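
Wired together, that single entry point might look like this sketch. The stubs and the regex are illustrative, and the 0.8 threshold is an arbitrary assumption, not a real tuning value:

import re

# Phone numbers and external links are the usual exfiltration routes
PHONE_OR_LINK = re.compile(r"\+?\d[\d\s().-]{7,}\d|wa\.me|t\.me|https?://", re.I)

def transcribe(audio: bytes) -> str:
    return ""  # stand-in for speech-to-text

def nsfw_score(image: bytes) -> float:
    return 0.0  # stand-in for an image model

def send(kind: str, payload) -> str:
    # Every channel converges on the same checks before delivery
    if kind == "image" and nsfw_score(payload) > 0.8:  # assumed threshold
        return "blocked: image failed NSFW scoring"
    text = payload if kind == "text" else transcribe(payload) if kind == "yap" else ""
    if text and PHONE_OR_LINK.search(text):
        return "flagged: external link or phone number"
    return "delivered"

print(send("text", "call me at +63 912 345 6789"))  # flagged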

This is also why we built video calls, voice messages, and auto-translation directly into the platform. You never need to move to WhatsApp or Telegram to get features that Boba already has. And the moment you leave the platform, you leave the safety net. Scammers know this. It's why "let's move to WhatsApp" is usually their first request.

What It Doesn't Do

We're not going to pretend AI catches everything. It doesn't. A very sophisticated scammer who plays a slow, patient game and avoids all the common patterns could potentially get through. Cultural context is hard for AI. Sarcasm in Tagalog might read differently than sarcasm in English. Regional slang can confuse the system.

That's why AI moderation is a layer of protection, not the whole thing. You still need to use your judgment. If someone avoids video calls, that's a red flag. If someone asks for money, that's a dealbreaker regardless of what the AI says. If something feels wrong, it probably is.

The AI is there to catch what you might miss, especially the slow-burn manipulation that's hard to spot when you're emotionally invested. It's the friend looking over your shoulder saying "hey, this doesn't look right."

Your Privacy

A reasonable question: if AI is reading all my messages, what happens to my privacy?

The moderation system processes messages in real time for safety analysis. It's not storing your conversations in some database for advertising or data mining. Boba doesn't sell your data. The AI reads the message, makes a safety determination, and moves on. Your conversations stay between you and your match.
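
To be clear about what that means in practice, here's a toy illustration of the stated policy, not Boba's actual data handling: the only thing worth persisting is the verdict, never the message body.

import time

def moderate_and_forget(message_text: str) -> dict:
    # Analysis happens in memory; a hypothetical keyword check stands in for the AI
    verdict = "warn" if "send money" in message_text.lower() else "ok"
    # The record that survives this call contains no conversation content
    return {"verdict": verdict, "checked_at": time.time()}

print(moderate_and_forget("Can you send money for my ticket?"))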

Why We Built It This Way

Most dating platforms add safety features as an afterthought. They build the matching and messaging first, then bolt on moderation when problems show up. We did it the other way around. Safety was baked into the messaging system from day one. Every message travels through the moderation pipeline before it reaches anyone.

It adds engineering complexity. It would have been easier and faster to skip it. But cross-cultural dating without strong moderation isn't a dating platform. It's a hunting ground.

That's not what we're building.