Welcome to Clubhouse's "Meet The Software Engineer" series, where we interview software engineers about their background and what they're working on! Varun Vijayaraghavan is a Senior Software Engineer at x.ai, a company that makes "ridiculously efficient AI software [that] solves the hassle of scheduling meetings and appointments."
Early on during my undergrad degree in India, in Electronics and Communications Engineering (primarily circuit design), I took an extra summer course in programming on Analog Devices' Digital Signal Processors. It was the first time I'd had so much fun writing software. We did low-level stuff like matrix multiplication in assembly, but also higher-level stuff like speech and image processing. I felt empowered, like I could create a lot of value if I became a software engineer. Also, writing and debugging programs was a lot of fun.
After completing my M.S. at Rutgers University in 2011, I knew that I wanted to join a startup where I could help build a product from the ground up. I joined a small startup called Visual Revenue, which was working on providing real-time predictive analytics for editors at large news companies. As expected, I was forced to learn a lot in a very short amount of time. I dove headfirst into scaling out MongoDB and Redis, and into building resilient, highly distributed real-time streaming applications and job systems. Funnily enough, this was also the first time I ever used git! Visual Revenue was acquired by Outbrain, where I spent a few more years scaling out this system.
In 2015, I joined x.ai (also started by the founders of Visual Revenue), where we’re building a virtual assistant to schedule meetings. I worked on the core decision making system here. This was also the first time I was able to build a large production system in a functional programming language (scala), and to see first hand how it helps build well-designed and well-tested complex applications.
The first non-trivial piece of software I worked on was a distributed web crawler and parser at Visual Revenue, which processed about 10k pages per hour. It was written in python with the lxml parsing library. Its job was basically to understand the structure of a news site's home page, based on custom tags, so that we could provide immediate positional recommendations and analytics to the site's editors.
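The core idea — walk a home page's markup and record where each tagged article sits — can be sketched in a few lines. This is a hypothetical illustration using python's stdlib `html.parser` (the real system used lxml), and the `data-article-id` attribute and markup below are invented for the example:

```python
from html.parser import HTMLParser

# Hypothetical sketch: record the ordinal on-page position of each article
# link on a news homepage, keyed by a custom data attribute. The real
# crawler used lxml; the tag and attribute names here are invented.
class PositionParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.positions = {}   # article id -> ordinal position on the page
        self._count = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "data-article-id" in attrs:
            self._count += 1
            self.positions[attrs["data-article-id"]] = self._count

html = """
<div class="top-stories">
  <a data-article-id="a1" href="/story1">Story 1</a>
  <a data-article-id="a2" href="/story2">Story 2</a>
</div>
"""
parser = PositionParser()
parser.feed(html)
print(parser.positions)  # {'a1': 1, 'a2': 2}
```

With positions extracted like this, an analytics layer can join them against click data to tell an editor how each slot on the page is performing.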
I love building systems with statically typed FP languages - I've experienced first hand how easily the paradigm lets us build highly complex, maintainable, and well-tested applications. I currently use scala, which gives me access to a rich set of libraries from both the scala and java ecosystems.
One of the more exciting developments in recent years is TypeScript. A type system and reasonable concurrency (async/await) in the most popular language on the web? Bring it! It's a big boost for building much more maintainable systems. And I'm definitely warming up to optional typing - if your small and simple program does not need types, that's fine!
On the infrastructure side, redis and kafka are two of the most well-designed and reliable pieces of software I've ever used: redis for fast, in-memory data structures, and kafka for distributed streams. They each serve a very specific set of use cases, but have always "just worked" for those cases.
Estimation is absolutely a big pain point and, especially for a startup, it's critical to manage it as well as possible, given that time-to-market and quick customer feedback loops are critical to a startup's success. Unfortunately, as the software grows bigger (by adding new features and handling many different edge cases), it gets a lot more difficult to estimate the time to ship something, because there will be a lot of interdependencies, both in the system and in the product.
Estimation is hard, and there’s no silver bullet to make it easy. The smaller the system and smaller the scope, the more manageable estimation becomes.
In my experience, the most successful projects are ones where we can carve out an MVP that's as small as possible, ship it quickly to a set of actual users, and then prioritize and build additional features and polish based on feedback. There is still uncertainty in the total time spent from start to end but, because each increment is small, it's hopefully easier to estimate the individual components and to make good but small prioritization decisions based on those estimates. This does not solve the estimation problem, but it gives you a bunch of knobs and levers to tweak while working on a project.
For the risky (in terms of time to complete) components of a project, it's invaluable to spend time gathering information about the system and the best way to implement the feature in it. This is typically hours, but could even be days, of work: discussing with people who are "experts" in the system, writing scripts to try out different APIs, or passing fake values into different parts of the system to see the output in a specific case (i.e. a unit test!). Spending time on this often reduces the estimate of a task from "a few weeks" to "a few days", which is often the difference between shipping the feature and not shipping it.
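That "fake values in, observe the output" probe really is just a throwaway unit test. A hypothetical sketch in python — the scheduling function and calendar values below are invented, not x.ai's actual code — shows the shape of such a probe:

```python
# Hypothetical sketch of probing a system with fake inputs instead of
# reading all its code. The function and values are invented; the point
# is that a quick throwaway test answers "what happens in case X?".
def next_free_slot(busy, day_start, day_end, length):
    """Return the first [start, end) gap of `length` hours not covered by `busy`."""
    cursor = day_start
    for b_start, b_end in sorted(busy):
        if b_start - cursor >= length:
            return (cursor, cursor + length)
        cursor = max(cursor, b_end)
    if day_end - cursor >= length:
        return (cursor, cursor + length)
    return None

# Probe with fake calendars to learn the behavior in specific cases:
assert next_free_slot([(9, 10), (10, 12)], 9, 17, 1) == (12, 13)
assert next_free_slot([(9, 17)], 9, 17, 1) is None   # fully booked day
print("probes passed")
```

A few minutes writing probes like these can replace days of reverse-engineering a component's behavior from its source.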
My personal philosophy is that tech debt is only expensive to the extent that you make changes to the code that carries it. If a system isn't changed that often, is not critical for your users, and the tech debt can be "worked around", it's probably not worth paying down, regardless of how bad the design or implementation feels.
I've also often seen (and often implemented!) systems that were designed very carefully for a certain set of presumed use cases, complete with parametric types, multiple microservices, or sometimes even a semi-framework. But immediately after shipping, watching users interact with the system, and getting their feature requests, it became readily apparent that the design was not the right one, or that we had spent way too much time on a part of the system that was not important. Oftentimes what happens is that you end up shoehorning new feature requests, and most of the actual customer value, into the existing design.
My lesson for new systems: build the simplest version possible given your current tech, get user feedback, and implement a few new features or bug fixes. After you really understand which parts of the system change often and are critical for user value, spend some time redesigning the important components where necessary.
At x.ai, we're building a virtual assistant (Amy & Andrew) that schedules meetings for you. Amy or Andrew sends emails to your guests to get their availability, and sends out an invite once everyone has responded with theirs.
A few things.
On the backend, the tech is primarily nodejs and scala. The backend services run on mesos+docker on EC2 instances in AWS. We use MongoDB and S3 for our storage, and SQS for our queueing systems. The machine learning models and associated work is in python, and we use angular for the frontend.
We set up mostly autonomous teams that focus on specific areas of the business or the product over a 6-week cycle. An area might be a big new feature, or focusing on specific known problem areas (e.g., multi-participant meetings) and building a lot of small improvements.
Each team typically does weekly sprint planning, daily stand-ups, and occasional retros.
This is a pretty effective way of helping teams and people focus and deliver at high speed. But with the focus on delivering big in specific areas, small bugs from previous cycles or general user requests for smaller things sometimes fall through the cracks.
System complexity, and changing or new requirements, often mean that code written a year ago no longer fully reflects the current system and data. This makes that code difficult to change, especially if you're not its original author. It's a tough challenge but, as soon as we identify it, we try to find opportunities to refactor, especially if we think we're going to make a lot of changes to that system in the near term.
A related problem is onboarding new team members. We encourage new team members to use the product like power users and understand what it does in complicated scenarios. It then becomes a lot easier for them to understand why something was written the way it was when they go and read the code for a particular component.
Both are still unsolved, though; otherwise they would not be pain points!
Great question, and this is not easy.
Urgent bugs take priority over all longer-term initiatives, even if that sometimes means an initiative gets delayed a bit. The smaller bugs and feature requests are harder, however. While one small bug by itself is not a major issue, many small bugs or feature requests in aggregate make for a poor user experience.
We have tried processes like “Bug Bash Friday” or “On-call focuses on bugs” with some, but limited, success. We have also had teams spend a few 6-week cycles focused on polishing the non-urgent issues.
In my experience, exposing engineers directly to the customer's voice is very effective in helping us identify the pain points in the system and find good solutions to customers' problems and feature requests. We have a few channels on Slack where customer feedback gets piped in directly, and these are available for all to read. [Editor Note: We do this at Clubhouse too!]
Maybe how big the system and codebase are, and how large the problem domain is. It has certainly been a big surprise for everyone who has joined x.ai. For example, when I started working on the core decision making system, I thought it would be feature-complete in a year, or 18 months at most. But here I am, nearly 4 years later, still implementing big and non-incremental changes in the same system.
I follow open source contributors in the scala community; they create a ton of very interesting and useful software. I love the tools and the simple, well-designed libraries that Li Haoyi (@li_haoyi on Twitter) has built - for example Ammonite, a debugging tool I use almost every day. I also love Miles Sabin's (@milessabin on Twitter) work on shapeless. While it's not something I use every day, it's fundamental to many other scala libraries, and it's really interesting and fun to see scala's type system stretched to its limits.
Salvatore Sanfilippo (aka @antirez / antirez.com) - he's the author of redis, one of my favorite pieces of infrastructure software. I follow his blog closely; I really love his careful attention to detail and the clarity of thought behind his design decisions, as well as behind his choices of what to build and what not to build.
Focus on the value you're delivering to the users of your software; everything else (including things like good software design) is secondary. When you're razor-focused on this, over time you will gain a lot more clarity on the truly important technical problems and the good design patterns for your domain. More often than not, they will be different from what you imagined before you started.
Also, write lots of tests, and learn to use powerful debugging tools in your domain.