My Ruby Portfolio
Development of an Online Charging System (OCS) for a virtual mobile operator serving the Caribbean region. This project gave me extensive experience with telecommunications infrastructure, high-load systems architecture, and integration with major mobile operators. The system handled real-time billing and policy management for mobile subscribers across multiple carriers.
Key achievements and responsibilities:
- Implemented high-performance Diameter protocol server for real-time charging and policy control
- Designed and deployed cloud infrastructure on GCP using Terraform
- Built distributed telecom system using Ruby on Rails and lightweight Ruby microservices
- Integrated with major Caribbean mobile operators’ infrastructure (all using Juniper hardware)
- Developed automated SFTP data synchronization service for customer data exchange
- Implemented real-time CDR (Call Detail Records) processing and billing pipeline
- Designed and implemented fault-tolerant database cluster using PostgreSQL
- Developed API gateway for third-party service integrations and partner access
- Built comprehensive reporting and analytics dashboard for business intelligence
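To give a flavour of the CDR pipeline, here is a minimal Ruby sketch of parsing and rating one record. The pipe-delimited field layout and per-minute rates are invented for illustration; real CDR formats are operator-specific.

```ruby
# Minimal sketch of one CDR (Call Detail Record) processing step.
# The field layout and rating table are hypothetical.

CdrRecord = Struct.new(:msisdn, :started_at, :duration_sec, :call_type)

RATES_PER_MINUTE = { "voice" => 0.30, "video" => 0.90 }.freeze

# Parse one pipe-delimited CDR line into a structured record.
def parse_cdr(line)
  msisdn, started_at, duration, call_type = line.strip.split("|")
  CdrRecord.new(msisdn, started_at, Integer(duration), call_type)
end

# Rate a record: charge per started minute.
def rate(record)
  minutes = (record.duration_sec / 60.0).ceil
  (minutes * RATES_PER_MINUTE.fetch(record.call_type)).round(2)
end

record = parse_cdr("17215550123|2019-04-01T10:15:00Z|95|voice")
charge = rate(record)  # 95 s -> 2 started minutes
```

In the real system a step like this fed the billing pipeline, decrementing subscriber balances in real time.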
One of my favorite jobs: networking, protocols (I had never heard of the Diameter protocol until then), high load, focused services, lots of data. And yes, all of that with Ruby under the hood. Nowadays, of course, I'd use Rust for all of it, but Ruby did a great job.
Development and DevOps work on an IoT platform managing over 100,000 connected devices. The project involved modernizing a legacy Rails application, implementing telecom integrations for eSIM management, and building scalable infrastructure on AWS using the HashiCorp stack (Nomad/Consul). Key achievements included setting up comprehensive monitoring with Loki/Prometheus/Grafana and designing a reliable PostgreSQL cluster on FreeBSD with ZFS.
Programming
- fix the legacy RoR codebase (and upgrade services to Rails 6)
- Ruby telecom adapter for the Telstra/Vodafone/Onomondo APIs (SMS alerts, eSIM management)
- build a microservices microframework
- implement a QR scanner for the web application
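The telecom adapter boiled down to hiding several carrier APIs behind one interface. Below is a plain-Ruby sketch of that shape; the class names and the `send_sms` signature are illustrative, not the real Telstra/Vodafone/Onomondo client APIs.

```ruby
# Adapter-pattern sketch: one carrier-agnostic interface, one adapter per
# carrier. Real adapters would call each carrier's HTTP API; these stubs
# only return a status hash for illustration.

class CarrierAdapter
  def send_sms(to:, text:)
    raise NotImplementedError
  end
end

class OnomondoAdapter < CarrierAdapter
  def send_sms(to:, text:)
    { carrier: :onomondo, to: to, status: :queued }
  end
end

class TelstraAdapter < CarrierAdapter
  def send_sms(to:, text:)
    { carrier: :telstra, to: to, status: :queued }
  end
end

# The platform picks an adapter per device/SIM and stays carrier-agnostic.
ADAPTERS = { onomondo: OnomondoAdapter.new, telstra: TelstraAdapter.new }.freeze

def alert(carrier, to, text)
  ADAPTERS.fetch(carrier).send_sms(to: to, text: text)
end
```

This kept SMS alerts and eSIM management code independent of whichever carrier a given device happened to use.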
DevOps
- design and implement a server cluster for high IoT load (over 100k devices)
- build a lightweight HashiCorp Nomad (+Consul) cluster
- add telemetry (Loki+Prometheus+Grafana)
- AWS: Network Load Balancer + launch templates and lots of other pieces (managed by Terraform)
- Packer for AMI pre-baking
- FreeBSD+ZFS for the PG cluster and control plane
- set up the CI/CD pipeline
- dockerize Ruby and C services
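Dockerizing a Ruby microservice looked roughly like this: a generic sketch, not the project's actual Dockerfile (the base image version and `service.rb` entry point are placeholders).

```dockerfile
# Generic sketch of a slim image for a Ruby microservice (illustrative).
FROM ruby:2.7-slim
WORKDIR /app

# Install gems first so this layer is cached between code changes.
COPY Gemfile Gemfile.lock ./
RUN bundle install --jobs 4

COPY . .
CMD ["bundle", "exec", "ruby", "service.rb"]
```

The C services followed the same idea with a build stage for compilation.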
One of my favourite projects! Sadly, the client disappeared right in the middle of the service's frontend implementation, so the screenshots show work in progress.
But anyway, here is the story.
David needed a car-engine database so users could select the engine they wanted to tune. After some research he found and bought the required DB. But because that DB was just raw scraped data, a lot of it was corrupted, and of course it wasn't logically structured. So the primary goal was to model the data and create a complex CSV data normalizer.
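The normalizer idea can be sketched with Ruby's stdlib CSV. The column names and cleaning rules here are invented for illustration; the real scraped DB was far messier.

```ruby
require "csv"
require "stringio"

# Sketch of the CSV normalizer idea: clean each row, drop the ones too
# corrupted to repair. Column names and rules are hypothetical.

def normalize_row(row)
  make  = row["make"].to_s.strip.capitalize
  model = row["model"].to_s.strip.upcase
  cc    = row["displacement"].to_s.gsub(/[^\d]/, "")  # "1,998 cc" -> "1998"
  return nil if make.empty? || cc.empty?

  { make: make, model: model, displacement_cc: Integer(cc) }
end

def normalize_csv(io)
  CSV.new(io, headers: true).filter_map { |row| normalize_row(row) }
end

data = <<~CSV
  make,model,displacement
   audi , tt ,"1,998 cc"
  ,unknown,
CSV

rows = normalize_csv(StringIO.new(data))  # second row is dropped as corrupt
```

The real normalizer was of course much larger: many columns, unit conversions, and cross-row consistency checks on top of this per-row cleaning.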
After that was done, the next goal was to create an API. Here I decided to use my favourites: Rack wrapped in tiny hanami-api, warden for token authentication, and the dry-rb gems collection. That way we've got beautiful and testable code.
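In plain Rack terms, the token authentication boils down to something like this. The token store and responses are stand-ins; the real app wired this up through warden strategies inside hanami-api.

```ruby
# A Rack app is just an object responding to #call(env) and returning
# [status, headers, body]. This sketch checks a bearer token by hand;
# the token list is a stand-in for a real token store.

VALID_TOKENS = ["s3cret"].freeze

API = lambda do |env|
  token = env["HTTP_AUTHORIZATION"].to_s.sub(/\ABearer /, "")
  if VALID_TOKENS.include?(token)
    [200, { "Content-Type" => "application/json" }, ['{"engines":[]}']]
  else
    [401, { "Content-Type" => "application/json" }, ['{"error":"unauthorized"}']]
  end
end

status, = API.call("HTTP_AUTHORIZATION" => "Bearer s3cret")
```

Keeping the app this close to bare Rack is what made the stack light and the code easy to test.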
The next step was to create the frontend. That was the first time I chose Elm as a frontend language, after some awesome experience with it in my learning sandboxes. Everything went smoothly until, in the middle of the frontend work, my client stopped responding :(
Anyway, I'd love to use Elm again someday; it was a joy to work with!
Actually, this project had my dream stack: parsers/normalizers, data modelling, APIs, a lightweight stack, and Elm for the frontend. I'd only add TailwindCSS to that list for completeness.
Long-term API project where I was the lead backend developer for two applications:
- a DJ's desktop app, which the DJ uses to tell the crowd which track is playing, respond to music requests, upload music-collection metadata that people can search, etc.
- and a mobile app for party guests, where people can order, vote for, and like the currently playing track, send tips to the DJ, search the DJ's music collection, etc.
Initially the app was built on Ruby on Rails, but soon we decided to replace it with Grape, simply because we didn't need the Rails beast for an API; that migration increased performance a lot. As a free bonus we got Swagger docs for the API. Hanami (formerly Lotusrb) wasn't mature enough yet, but today I'd definitely use it instead (or even a clean Rack app), additionally replacing ActiveRecord, which I consider an antipattern for everything but an MVP.
Interesting design decisions/stack used:
- Firebase for authentication
- Google Cloud Messaging to interact with guests' mobile applications (broadcasts, direct messages)
- heavy use of bz2 streams to transfer huge DJ music-collection metadata
- stream compressed metadata in NDJSON format
- PG per-DJ partitioning (at the time there was no native out-of-the-box PG solution for that)
- integration with multiple external music APIs (cover-image lookup, song search by fingerprint, etc.). At one point each background track-normalization process called 5 different external APIs
- of course, most core features were covered by tests (unit/feature, all kinds, depending on the goal of the test)
- custom Firebase-authentication warden strategies implemented for both roles: DJ and Guest
- a few PG triggers written in PL/pgSQL to ease work with millions of music-track records
- Braintree payments integration
- Que used for background tasks (the client wanted no extra components like Redis in the stack)
- simple S3 integration
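The compressed NDJSON streaming above can be sketched as follows. The project used bz2 streams, but Ruby's stdlib only ships Zlib, so gzip stands in here; the streaming shape is the same, and the track data is invented.

```ruby
require "json"
require "stringio"
require "zlib"

# Stream metadata as compressed NDJSON: one JSON document per line,
# so the receiver can decode track-by-track without loading everything.
# (gzip via Zlib stands in for the bz2 the project actually used.)

def write_ndjson_gz(io, tracks)
  gz = Zlib::GzipWriter.new(io)
  tracks.each { |t| gz.write(JSON.generate(t) << "\n") }
  gz.finish  # flush the gzip stream without closing the underlying IO
end

def read_ndjson_gz(io)
  Zlib::GzipReader.new(io).each_line.map { |line| JSON.parse(line) }
end

tracks = [
  { "artist" => "Daft Punk", "title" => "One More Time" },
  { "artist" => "Moby",      "title" => "Porcelain" }
]

buf = StringIO.new
write_ndjson_gz(buf, tracks)
buf.rewind
restored = read_ndjson_gz(buf)
```

Line-delimited JSON is what made huge collections practical: the mobile app could start indexing long before the stream finished.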
This was a very interesting experience, where I had to use everything I knew about performance optimization and about leading development of a big project with lots of data.
Big DevOps job with a lot of complex tasks, keeping costs low for the client. One of the cool choices was a Nomad cluster (we didn't need bloated k8s here), and the Nomad+Vault+Consul trio did the job perfectly.
- network design, stack parts selection
- Terraform to set up AWS for each environment (staging/production), including ELB, CloudFormation, CloudFront, etc.
- custom AMIs, built with Packer for each role, used to provision machines
- GitLab CI used to build the apps' Docker images and push them to the Nomad cluster
- two (scalable) HAProxy instances as reverse proxies working in tandem with Consul
- dozens of smaller AWS building blocks involved: TLS certs, DNS, etc.
- lots of shell scripting and automation
The project consists of two parts: a main Grape-based API backend and a web crawler gathering vehicle sale postings from around the web. The API includes a tricky communication layer built on Twilio. My notable goals were:
- migrate both apps from Resque to Sidekiq (jobs rework, queue tuning)
- add Sidekiq monitoring and logging extensions
- refactor/add MMS handling for Twilio
- leasing and freeing Twilio phone numbers
- forwarding calls/steganography (shown on screenshot)
- managing and tuning Amazon EC2 instances
- rewrite the internal Slack messenger and make it testable (shown on screenshot)
- integration with the NewRelic, Rollbar, Papertrail and Dead Man's Snitch 3rd-party services
- plenty of small refactorings (the code quality was low)
And of course, everything was tested with RSpec.
NOTE: my goals were related to the backend only.
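On the MMS side, Twilio's incoming-message webhook posts form parameters including `NumMedia` plus `MediaUrlN`/`MediaContentTypeN` pairs. Here is a sketch of extracting the attachments; the `MmsAttachment` struct and the sample values are illustrative, not the project's code.

```ruby
# Sketch of MMS handling: pull attachment URL/content-type pairs out of a
# Twilio webhook params hash. Sample URLs below are placeholders.

MmsAttachment = Struct.new(:url, :content_type)

def extract_attachments(params)
  count = Integer(params.fetch("NumMedia", "0"))
  (0...count).map do |i|
    MmsAttachment.new(params["MediaUrl#{i}"], params["MediaContentType#{i}"])
  end
end

params = {
  "From"              => "+15550001111",
  "Body"              => "photos attached",
  "NumMedia"          => "2",
  "MediaUrl0"         => "https://api.twilio.com/example/Media/ME0",
  "MediaContentType0" => "image/jpeg",
  "MediaUrl1"         => "https://api.twilio.com/example/Media/ME1",
  "MediaContentType1" => "image/png"
}

attachments = extract_attachments(params)
```

A background job then fetched each URL and stored the media before replying to the sender.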
AngularJS + Ruby on Rails project integrating the Apple Passbook API and the Apple push service. This project was done entirely by myself for the BestFit mobile company.
Features implemented:
- WYSIWYG pass designer with near pixel-perfect results and support for all five pass types
- several versions of the same pass supported
- ability to schedule publishing of any version to registered devices using the Passbook API (+push service)
- generating and signing passes
- simple and robust hand-crafted image-uploading service (instead of Carrierwave or Paperclip). Building it decoupled image processing and made the whole pass-creation flow straight and neat.
- a whole bunch of nice other features
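For pass generation, a .pkpass bundle contains pass.json, a manifest.json listing the SHA-1 hash of every file, and a detached PKCS#7 signature of the manifest. Below is a trimmed sketch of the first two pieces; the pass fields are illustrative, and the signing step is omitted since it needs the Apple certificates.

```ruby
require "digest"
require "json"

# Build a minimal pass.json and the manifest.json of SHA-1 hashes that
# goes into a .pkpass bundle. Field values here are placeholders.

def build_pass_json(serial:, description:)
  JSON.generate(
    "formatVersion" => 1,
    "serialNumber"  => serial,
    "description"   => description,
    "storeCard"     => {}  # one of the five pass types
  )
end

def build_manifest(files)
  # files: { "pass.json" => bytes, "icon.png" => bytes, ... }
  JSON.generate(files.transform_values { |bytes| Digest::SHA1.hexdigest(bytes) })
end

pass_json = build_pass_json(serial: "0001", description: "Loyalty card")
manifest  = build_manifest("pass.json" => pass_json)
```

The real service then produced the detached signature over manifest.json and zipped everything into the .pkpass delivered to devices.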
The code was heavily tested, and that was relatively easy thanks to nice gems like light-service (I love the Command pattern :) and virtus.
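The light-service style of Command pattern can be sketched in plain Ruby (no gem): each action does one step against a shared context, and the organizer short-circuits on failure. Action and field names below are illustrative, not the project's code.

```ruby
# light-service-style commands: small actions, a shared context, and an
# organizer that stops at the first failure. This is a plain-Ruby sketch.

Context = Struct.new(:params, :pass, :errors, keyword_init: true) do
  def failure?
    !errors.empty?
  end
end

class ValidateParams
  def self.call(ctx)
    ctx.errors << "missing serial" unless ctx.params[:serial]
    ctx
  end
end

class BuildPass
  def self.call(ctx)
    ctx.pass = { serial: ctx.params[:serial], signed: true }
    ctx
  end
end

# Organizer: run the actions in order, bail out on failure.
def create_pass(params)
  ctx = Context.new(params: params, pass: nil, errors: [])
  [ValidateParams, BuildPass].each do |action|
    action.call(ctx)
    break if ctx.failure?
  end
  ctx
end
```

Each action being a tiny class with a single `call` is exactly what made the code so easy to test in isolation.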
Heavy refactoring/cleanup/drying of legacy code plus new feature implementation. There were no unit tests at all; about three months after I joined the project there were more than 500.
Technologies used:
- common gem set (about one hundred gems: rspec, cucumber, poltergeist, rails 3.2, capistrano, mysql, etc.)
- devise: with heavy customization to satisfy business needs
- active_merchant: billing (with authorize.net binding/testing)
- activeadmin: with decorated models
- wicked_pdf: for health-history PDF generation
- cocoon: for nested forms (yep, it was a long time ago :)
- jenkins: CI with GitHub PR triggers
- mercury: editor
- errbit
- redmine
Note on frontend: when I joined this project, the frontend (including JS) was almost complete; I only refactored the CSS (SCSS+Compass) and JS (CoffeeScript).
Almost completely rewrote the ugly backend source code (Ruby on Rails based), added the completely missing integration/unit specs, added Mercury Editor for admins, did no frontend work (the client was waiting for a new design), and removed, or replaced with gems, 90% of the code (mostly JS).
Results of refactoring:
3,626,024 bytes -> 322,949 bytes
536 files changed
5562 insertions(+)
98880 deletions(-)
Less code - less trouble :)
My first commercial Ruby job. A regular-complexity site for a Russian logistics company, built solely by myself from the ground up.
Tools used:
- Ruby on Rails 3.2
- MySQL
- simple_form
- strong_parameters (not yet built into Ruby on Rails 3.2)
- rubytree
- active_admin
- compass-rails
- bootstrap-sass
- rspec
- capybara
- factory_girl_rails
- livereload
- capistrano
- passenger
This website was created using Hugo 0.139.2 and Google Material Design Lite.