Service Level Agreement

Last updated: March 2026

This Service Level Agreement ("SLA") is entered into between Ribon Inc., d/b/a Roster Giving ("Provider"), and the subscribing organization ("Customer"), and establishes the service levels, performance targets, and remedies applicable to the Roster Giving platform. This SLA is incorporated into and governed by the applicable Master Services Agreement or Order Form between Provider and Customer.

1. Definitions

Term | Definition
Uptime | Percentage of time the Service is available and responsive, measured by synthetic monitoring tests from multiple US locations
Downtime | Period during which the Service returns errors (5xx) or fails to respond within 30 seconds from 2+ monitoring locations
Scheduled Maintenance | Pre-notified maintenance windows excluded from the uptime calculation
Error Budget | Remaining allowable downtime within the SLA period (monthly)
Service | The Roster Giving platform, including the web application, REST API, AI features, and payroll integration
P0 Incident | Complete service outage or data loss affecting all users
P1 Incident | Major functionality degraded for a majority of users
P2 Incident | Minor functionality impaired; workaround available
P3 Incident | Cosmetic or non-critical issue

2. Service Level Targets

2.1 Availability

Metric | Target | Measurement
Monthly Uptime | 99.9% (43.8 min max downtime) | Synthetic test success rate at 1-minute intervals
Planned Maintenance | Max 4 hours/month | Maintenance window tracking

2.2 Performance

Metric | Target
API Response Time (p95) | < 500 ms
AI Response Time (p95) | < 5,000 ms
Page Load Time (p95) | < 2,000 ms
CSV Import Processing | < 5 minutes (10 MB file)
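For illustration only, the p95 targets above can be evaluated against a batch of collected latency samples using a nearest-rank percentile. The function name and sample data below are hypothetical and not part of the Service:

```python
import math

def p95(latencies_ms):
    """Nearest-rank 95th percentile of a list of latency samples (ms).
    Hypothetical helper for illustrating the Section 2.2 targets."""
    ordered = sorted(latencies_ms)
    rank = math.ceil(0.95 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

# Example: 100 API latency samples from 10 ms to 1,000 ms in 10 ms steps
samples = [10 * i for i in range(1, 101)]
print(p95(samples))        # 950
print(p95(samples) < 500)  # False: this batch would miss the API target
```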

2.3 Data Integrity

Metric | Target
Recovery Point Objective (RPO) | < 24 hours (daily backup) / < 1 second (PITR)
Recovery Time Objective (RTO) | < 4 hours (database) / < 30 minutes (application)
Data Loss | Zero for committed transactions

2.4 Support Response Times

Severity | Acknowledgment | Update Frequency | Resolution Target
P0 — Critical | < 15 minutes | Every 30 minutes | < 4 hours
P1 — Major | < 1 hour | Every 2 hours | < 8 hours
P2 — Minor | < 4 hours | Daily | < 5 business days
P3 — Low | < 24 hours | Weekly | Best effort

3. Measurement Methodology

3.1 Uptime Measurement

Uptime is measured using industry-standard synthetic monitoring:

  • API tests running every 1–10 minutes from multiple US locations
  • Browser tests validating critical user journeys (login, dashboard, AI chat)
  • Downtime declared when 2+ locations report failure within a 5-minute window
  • Excludes: scheduled maintenance windows, force majeure events
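To make the arithmetic concrete, here is a minimal sketch of the uptime and error-budget calculations implied by Sections 2.1 and 3.1. Function names are illustrative; the actual monitoring pipeline is not shown:

```python
def monthly_uptime_pct(minutes_in_month, downtime_min, maintenance_min):
    """Uptime %, with scheduled maintenance excluded from the denominator
    per Section 3.1."""
    eligible = minutes_in_month - maintenance_min
    return 100.0 * (eligible - downtime_min) / eligible

def error_budget_remaining_min(minutes_in_month, downtime_min,
                               maintenance_min, target_pct=99.9):
    """Minutes of downtime still allowed this month under the 99.9% target."""
    eligible = minutes_in_month - maintenance_min
    return eligible * (1.0 - target_pct / 100.0) - downtime_min

# Average month (730 h = 43,800 min), 12 min downtime, 2 h maintenance
print(round(monthly_uptime_pct(43_800, 12, 120), 3))         # 99.973
print(round(error_budget_remaining_min(43_800, 12, 120), 1)) # 31.7
```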

3.2 Performance Measurement

Performance is measured using complementary data sources:

  • Real User Monitoring (RUM): Actual user experience data from production traffic
  • Synthetic Tests: Controlled baselines from standardized test locations
  • Application Performance Monitoring (APM): Server-side latency with distributed tracing

3.3 Data Retention

Data Source | Retention
SLOs | 15 months
Real User Monitoring | 15 days (sessions), 15 months (metrics)
APM | 15 days (traces), 15 months (metrics)
Synthetic Tests | 15 months

4. Reporting

4.1 Real-Time Dashboard

  • SLO dashboard shared with Customer via secure link or guest account
  • Displays current uptime %, error budget remaining, and active incidents
  • Updated continuously

4.2 Monthly SLA Report

Delivered via email (PDF) on the 1st business day of each month to designated Customer contacts. Contents include:

  • Monthly uptime percentage
  • Performance metrics (p95 latencies)
  • Incident summary (count by severity, MTTR)
  • Error budget status
  • Service credit calculation (if applicable)
  • Maintenance window log

4.3 Incident Notifications

  • P0/P1: Immediate notification via status page and email
  • Status Page: Real-time updates available to Customer
  • Post-Mortem: Published within 5 business days for P0/P1 incidents

5. Service Credits

5.1 Credit Schedule

Monthly Uptime | Credit (% of Monthly Fee)
99.9% or above | No credit
99.0% – 99.89% | 10%
95.0% – 98.99% | 25%
Below 95.0% | 50%
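As a sketch, the credit schedule above reduces to a simple tier lookup. The function is hypothetical; note that the schedule leaves uptime values between 99.89% and 99.9% unstated, and this sketch assumes they fall into the 10% tier:

```python
def service_credit_pct(uptime_pct):
    """Service credit (% of monthly fee) for a given monthly uptime,
    per the Section 5.1 schedule.
    Assumption: uptime in the unstated 99.89%-99.9% gap earns the 10% tier."""
    if uptime_pct >= 99.9:
        return 0
    if uptime_pct >= 99.0:
        return 10
    if uptime_pct >= 95.0:
        return 25
    return 50

print(service_credit_pct(99.95))  # 0
print(service_credit_pct(99.5))   # 10
print(service_credit_pct(90.0))   # 50
```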

5.2 Credit Terms

  • Credits calculated automatically in monthly SLA report
  • Maximum credit: 50% of monthly fee for the affected month
  • Credits applied to next invoice (not refundable as cash)
  • Customer must request credit application within 30 days of report
  • Exclusions: scheduled maintenance, Customer-caused issues, force majeure

6. Maintenance Windows

6.1 Maintenance Policy

Type | Notice Period | Maximum Duration | Frequency
Standard | 72 hours | 2 hours | As needed
Major | 7 days | 4 hours | Quarterly max
Emergency | 1 hour | As needed | Exceptional

6.2 Preferred Window

  • Day: Sunday
  • Time: 2:00 AM – 6:00 AM Eastern Time
  • Notification: Via email to designated contacts + status page update

6.3 Zero-Downtime Deployments

Most updates are deployed with zero downtime using blue/green deployments, CDN cache invalidation, and backward-compatible database migrations.

7. Data Protection

7.1 Encryption

Layer | Method
In Transit | TLS 1.2+ on all connections
At Rest (Database) | AES-256
At Rest (Files) | AES-256 (SSE)
At Rest (Backups) | AES-256

7.2 Backup & Recovery

Component | Frequency | Retention
Database | Continuous (WAL + PITR) | 7 days
File Storage | Per change (versioning) | 90 days
Application Code | Per commit | Indefinite
Infrastructure | Per change (IaC state) | Versioned

7.3 Geographic Distribution

Component | Primary Region | Secondary
Application | US-East (Virginia) | CDN (400+ edge locations)
Database | US-East | Managed replication
File Storage | US-East | Cross-region replication

8. Provider Responsibilities

Provider shall:

  1. Maintain service availability per SLA targets
  2. Provide monthly SLA reports
  3. Notify Customer of incidents per response time targets
  4. Conduct quarterly disaster recovery drills
  5. Apply security patches within 72 hours (critical) or 30 days (non-critical)
  6. Maintain SOC 2 Type II compliance evidence
  7. Provide real-time monitoring dashboards
  8. Process service credit requests within 30 days

9. Customer Responsibilities

Customer shall:

  1. Maintain SSO/IdP configuration and availability (if applicable)
  2. Provide designated contact list for notifications
  3. Allow outbound HTTPS (port 443) to Roster endpoints
  4. Whitelist Roster email sender addresses
  5. Report incidents via designated channels
  6. Review and acknowledge monthly SLA reports
  7. Participate in integration testing as scheduled

10. Escalation Process

10.1 Technical Escalation

Level | Contact | Trigger
L1 | Roster Support | Initial report
L2 | Roster Engineering | L1 unresolved in 2 hours
L3 | Roster CTO | P0 unresolved in 4 hours

10.2 SLA Dispute Resolution

  1. Customer submits dispute via email with specific dates and evidence
  2. Roster reviews within 5 business days
  3. Joint review meeting if disagreement persists
  4. Escalate to executive sponsors if unresolved

Questions

For questions about this SLA or to request a copy tailored to your organization, please contact us:

Roster Giving

Email: support@rostergiving.com

San Francisco, CA