Stabilize and scale the emergency button system.
Emergency Panic Button System Migration and Cloud Cost Optimization for ROAR
ROAR develops SOS buttons for employees in hospitals and hotels worldwide. They needed a tech partner to help them cut cloud costs, improve device connectivity, and speed up releases.
60% cloud expense reduction
Device connectivity improved from 20% to 80%
8x faster deployment time
“ROAR came to us to migrate their gateways to the new connectivity system. But more broadly, they needed a partner to help them reduce costs and grow.”
– Mykhailo Maidan, CTO at Yalantis
Need to build an AIoT platform from scratch? Let’s make it happen.
Start your success story with us. Share the details of your project and book a call to discuss your goals.
From medical devices to industrial automation — we deliver complete enterprise solutions with regulatory compliance built-in. Everything under one roof.
FAQ
- How did you handle the firmware platform migration without risking the emergency call button app's availability?
Our team rolled out updates in waves, tested multiple emergency scenarios, and auto-validated update success to catch failures immediately. This approach kept the emergency panic button app reliable while delivering a large-scale OTA upgrade.
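For illustration, the sketch below shows what such a wave-based rollout loop can look like in Python. It is a minimal sketch, not ROAR's production tooling: the push_firmware, device_healthy, and rollback helpers are hypothetical stand-ins for a fleet-management API, and the wave sizes and failure budget are assumptions.

```python
import time

# Hypothetical wave sizes: canary first, then progressively larger batches.
WAVE_SIZES = [25, 250, 2500]
FAILURE_BUDGET = 0.02  # abort and roll back if >2% of a wave fails validation


def rollout(version, devices, push_firmware, device_healthy, rollback):
    """Push an OTA update in waves, auto-validating each wave before the next."""
    remaining = list(devices)
    for size in WAVE_SIZES:
        if not remaining:
            break
        wave, remaining = remaining[:size], remaining[size:]
        for device in wave:
            push_firmware(device, version)
        time.sleep(600)  # give devices time to update, reboot, and reconnect
        failed = [d for d in wave if not device_healthy(d)]
        if len(failed) > len(wave) * FAILURE_BUDGET:
            # Instant rollback keeps the panic buttons on known-good firmware.
            for device in wave:
                rollback(device)
            raise RuntimeError(f"wave of {len(wave)} failed: {len(failed)} unhealthy")
```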
- What exactly drove the 60% cloud cost reduction?
Our team re-architected the AWS stack for elastic scaling, optimized data storage with tiered approaches, reduced redundant logging and traffic, and added autoscaling rules so capacity matched real load. The end result was lower spend without sacrificing performance or flexibility.
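As one concrete example of tiered storage, an S3 lifecycle policy can age device logs into cheaper storage classes automatically. The boto3 sketch below illustrates that pattern; the bucket name, prefix, and retention periods are placeholder assumptions, not ROAR's actual configuration.

```python
import boto3

s3 = boto3.client("s3")

# Age device logs into cheaper storage classes instead of keeping
# everything in S3 Standard. Bucket and prefix are placeholders.
s3.put_bucket_lifecycle_configuration(
    Bucket="roar-device-logs",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-device-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access
                    {"Days": 90, "StorageClass": "GLACIER"},      # cold archive
                ],
                "Expiration": {"Days": 365},  # drop logs after a year
            }
        ]
    },
)
```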
- What scale can this architecture support, and what changed in reliability?
The platform processes data from ~80,000 devices (status, alerts, logs) and, after the re-architecture, supports more devices at lower cost. Connectivity improved substantially, from roughly 20% to 80% after the migration and hardening work.
- How did you achieve the 8x faster releases in practice?
We introduced isolated environments, CI/CD automation, stronger documentation and testing, and a cleaner release flow, cutting deployment time from ~2 hours to ~15 minutes.
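A typical building block of that automation is a post-deploy health gate that must pass before a release is promoted. The stdlib-Python sketch below shows the idea; the health endpoint URL, retry count, and interval are assumptions for illustration, not ROAR's actual pipeline.

```python
import sys
import time
import urllib.request

HEALTH_URL = "https://api.example.com/health"  # placeholder endpoint
ATTEMPTS = 30
INTERVAL = 10  # seconds between polls


def deploy_gate():
    """Fail the pipeline unless the freshly deployed service reports healthy."""
    for _ in range(ATTEMPTS):
        try:
            with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
                if resp.status == 200:
                    return 0  # healthy: promote the release
        except OSError:
            pass  # service still starting up; retry
        time.sleep(INTERVAL)
    return 1  # never became healthy: block the release


if __name__ == "__main__":
    sys.exit(deploy_gate())
```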
- If we engaged you, what team and engagement model should we expect after go-live?
The core team included a PM, CTO, backend engineer, and DevOps engineer across IoT consulting and product development, with ongoing architecture reviews, cost management, and best-practice guidance beyond the initial optimization. The engagement model stays transparent on budgeting and continues to focus on customer success after go-live.
- What did you tackle to stabilize the system and de-risk operations?
- Days 1–30: restore observability and safety, add monitoring and alerts, introduce canary OTA with instant rollback, and fix the noisiest connectivity paths.
- Days 31–60: run a staged migration of ~2,500 gateways from NervesHub 1.3 → 2.2 in controlled waves, with auto-validation to catch failures and protect emergency availability.
- Days 61–90: re-architect AWS for autoscaling and lean storage/traffic, and stand up CI/CD (see the autoscaling sketch after this list), raising connectivity from ~20% to ~80%, trimming cloud spend by ~60%, and cutting release time from ~2 hours to ~15 minutes (~8x faster).
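For the autoscaling piece of the Days 61–90 work, a target-tracking policy is a common AWS mechanism for matching capacity to real load. The boto3 sketch below shows what such a policy can look like for an ECS service; the cluster name, service name, capacity bounds, and CPU target are illustrative assumptions.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the ECS service (names are placeholders) as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/prod-cluster/ingest-service",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

# Track average CPU at 60%: scale out quickly, scale in conservatively,
# so capacity follows real device load instead of a fixed headroom.
autoscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/prod-cluster/ingest-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 300,
    },
)
```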
