Embedding fraud scoring
September 30, 2025 · 4 min read
A fraud score that only counts bad signals misses half the picture. A login from a datacenter IP is suspicious. The same login from a datacenter IP on an account that has logged in from that IP every weekday for a year is a developer at work. A new device is a risk signal. A new device on an account with a five-year history, a corporate email domain, and consistent geographic patterns is someone who bought a new phone.
Effective fraud scoring considers both negative and positive signals. The risk indicators (unfamiliar device, proxy connection, disposable email) pull the score in one direction. The trust indicators (established account history, consistent login patterns, a reputable email domain, a device seen across many previous sessions) pull it in the other. The score reflects the balance between the two, and that balance is what separates a genuine risk from a false positive.
This sounds obvious, but most fraud scoring systems make it difficult in practice. Proprietary platforms produce a number. The number goes up when bad signals appear. What you cannot see is how trust signals are weighted, whether they are weighted at all, or why a specific combination of signals produced a specific score. When a legitimate user is blocked, tracing the cause means opening a support ticket with the vendor and hoping their response is more informative than the score itself.
Fraud scoring that you can read, review, and amend on the same day a problem appears is a different kind of tool. That is what self-hosted, open-source scoring provides.
Why you need to see and change the rules
Fraud patterns are specific to your product. A marketplace has different risk signals than a self-hosted application. A platform with a global user base has different geographic norms than one serving a single country. A product with many legitimate VPN users needs different IP-category weights than one where VPN traffic is rare.
A proprietary scoring vendor trains their model on aggregate data from their customer base. The model reflects average fraud patterns across many products, not the specific patterns targeting yours. When the model produces a false positive, you cannot see which rules fired or why. When a new attack targets your registration flow specifically, you cannot write a rule to address it. You wait for the vendor's next model update.
The ability to review and amend your scoring rules on the same day you identify a problem is the practical difference between scoring you own and scoring you rent. A rule that was appropriate last month may be generating false positives today because attack patterns shifted. A signal that was irrelevant at launch may become critical as your user base changes. Scoring is not something you configure once. It is something you maintain continuously, and maintaining it requires access to the logic.
How tirreno handles scoring
tirreno is an open-source security framework that runs entirely on your infrastructure. It provides initial risk scoring out of the box through built-in rules and presets, and lets you create your own rules for your specific needs.
Weights can be positive or negative. An email address on an established domain that appears in historical breach data increases trust, because it is evidence the address is old and real. A datacenter IP decreases it. The score for any event reflects both sides.
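The signed-weight idea can be sketched in a few lines. The rule names and weight values below are illustrative, not tirreno's built-in presets; they only show how risk and trust signals offset each other in a single score.

```python
# Minimal sketch of signed-weight scoring. Positive weights are risk
# signals, negative weights are trust signals; the score is their sum.
RULES = {
    "datacenter_ip":      15,   # risk: raises the score
    "new_device":         10,   # risk
    "disposable_email":   20,   # risk
    "account_age_gt_1y": -15,   # trust: lowers the score
    "known_device":      -10,   # trust
}

def score(fired_signals):
    """Sum the weights of every rule that fired for this event."""
    return sum(RULES[s] for s in fired_signals)

# A datacenter login from a long-established account on a known device:
developer = score(["datacenter_ip", "account_age_gt_1y", "known_device"])
# The same datacenter IP on a fresh account with a throwaway email:
attacker = score(["datacenter_ip", "new_device", "disposable_email"])

print(developer)  # -10: trust signals outweigh the risk signal
print(attacker)   #  45: nothing offsets the risk signals
```

The same risk signal lands on opposite sides of any sensible threshold depending on what trust signals accompany it, which is the balance the paragraph above describes.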
The contribution of each rule to any user's score is traceable. When a legitimate user is incorrectly scored, you open the user's profile, see which rules fired and with what weights, and adjust. When a new fraud pattern appears that no existing rule covers, you write a rule, assign a weight, and deploy it. The cycle from identifying a problem to having a rule in production can be hours, not weeks.
Scoring is the foundation you build on
The value of owning your scoring infrastructure compounds over time. The behavioral data you collect, the rules you write, the thresholds you tune, and the patterns you learn about your specific user population are institutional knowledge. Every adjustment makes the next one easier because you understand your data better.
On-premises scoring also means the infrastructure is already in place when your needs grow. A team that deploys tirreno for registration fraud scoring has the event pipeline, the rule engine, and the behavioral profiles needed to add account takeover detection, bot monitoring, API abuse scoring, or insider threat detection. Extending coverage means applying a new preset and adding event types, not evaluating a new vendor.
Starting with a SaaS scoring vendor and migrating later means leaving that accumulated knowledge behind. Your detection history, your tuned weights, your learned patterns, all of it lives on the vendor's infrastructure. Migration means starting over. The earlier you own your scoring, the more value you accumulate from it.
Getting started
Install. Deploy a tirreno instance on any server or container you control. The administration guide covers setup and configuration.
Send your first events. Send events from your application for logins and registrations. Include user identifier, email, IP, user agent, and event type. tirreno expects a username with each event. The developer guide has the API schema.
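A first event submission might look like the sketch below, using only the Python standard library. The endpoint path, header name, and field names here are assumptions for illustration; check the developer guide for the actual API schema of your tirreno version.

```python
import json
import urllib.request

# Hypothetical instance URL and API key -- replace with your own.
TIRRENO_URL = "https://tirreno.example.internal/sensor/"
API_KEY = "your-api-key"

def build_event(username, email, ip, user_agent, event_type):
    """Assemble one event; tirreno expects a username with each event.
    Field names are illustrative -- see the developer guide."""
    return {
        "userName": username,      # required user identifier
        "emailAddress": email,
        "ipAddress": ip,
        "userAgent": user_agent,
        "eventType": event_type,   # e.g. a login or registration
    }

def send_event(event):
    req = urllib.request.Request(
        TIRRENO_URL,
        data=json.dumps(event).encode(),
        headers={"Api-Key": API_KEY, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

event = build_event("u123", "dev@example.com", "203.0.113.7",
                    "Mozilla/5.0", "login")
# send_event(event)  # uncomment once pointed at your instance
```

Wiring this into your login and registration handlers is usually enough to start seeing scored traffic.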
Apply a preset. Pick the preset that matches your most pressing concern (account_registration, account_takeover, fraud_prevention) and activate it from the rules page. Browse the activity page to see how your traffic is being scored.
Review and adjust. Look at the scores your real traffic produces. Find the false positives. Trace which rules contributed. Adjust the weights. This is the work that makes scoring accurate for your product, and it is work that only you can do.
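The review loop above can be sketched as: trace each fired rule's contribution to a score, find the weight responsible for the false positive, lower it, and confirm the user no longer trips the threshold. Rule names, weights, and the threshold are illustrative examples, not tirreno's actual presets.

```python
# Illustrative review loop: explain a score, then tune a weight that
# is generating false positives.
weights = {
    "vpn_ip":            20,
    "new_device":        10,
    "account_age_gt_1y": -15,
}
THRESHOLD = 10  # scores at or above this are flagged

def trace(fired):
    """Per-rule contributions for one scored event, plus the total."""
    parts = {rule: weights[rule] for rule in fired}
    return parts, sum(parts.values())

# A five-year-old account on a VPN with a new phone keeps getting flagged:
fired = ["vpn_ip", "new_device", "account_age_gt_1y"]
parts, total = trace(fired)
print(parts, total)   # vpn_ip dominates; total is 15, above threshold

# This product has many legitimate VPN users, so lower that weight:
weights["vpn_ip"] = 10
_, total = trace(fired)
print(total)          # 5, below threshold: no longer flagged
```

The point of owning the rules is that this whole cycle, from tracing the contribution to deploying the new weight, happens in your own instance rather than in a vendor ticket queue.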
Download at tirreno.com/download. A live demo is at play.tirreno.com.