Behind the Scenes of Recurve’s Security Strategy
Security looms large over every startup. A serious data breach can sink a company overnight. And yet, the path to realizing security is uncertain. What does it mean to be “more secure”? And what battles do you choose to fight in a resource-constrained startup environment?
At Recurve, we had to get serious about security early on because we handle sensitive smart meter data from buildings across the United States. Our customers, risk-averse utilities, require vendors to fill out long Security Questionnaires before contracts are signed. To succeed, we needed to smooth out this time-consuming and confusing process.
But just answering Security Questionnaires isn’t enough.
These assessments tend to lag behind the state of the art in security practice and do not fit the cloud-based infrastructure of a startup in 2020 particularly well. So, in addition to passing Security Questionnaires, we sought to meaningfully reduce cybersecurity risk as we became responsible for greater amounts of sensitive data. This meant starting our own internal security program.
This was a tricky problem to approach. While the task of security had outgrown the capacity of the team to simply figure it out as they went, the company was too small to hire a full-time security engineer. The solution was to carve out an explicit Security Lead role, tapping an engineer familiar with the infrastructure (me) to spend part of every week thinking about how to better secure it.
Long story short, the effort was a success.
Our position toward security with customers flipped; rather than defensively answering Security Questionnaires, we started guiding customers through security strategy. We landed millions of dollars of enterprise contracts over the course of this project, creating a mature security program that became a cornerstone of our pitch to prospective customers. All of this was accomplished without slowing down feature progress; in fact, we often found ways to marry security improvements with engineering productivity enhancements.
Here are some of my takeaways from my two years as Recurve’s security lead:
Lean into the Security Questionnaire
The Security Questionnaire is a rite of passage for startups that sell to enterprise customers.
The typical Security Questionnaire consists of hundreds of pages of strangely color-coded Word and Excel files–a sedimentary accumulation of security buzzwords that go as far back as the 80s. These files are interesting primarily for their archaeological value, not for their benefit to security.
There is differing advice on how to approach these things. Ryan McGeehan’s Understanding the Security Questionnaire outlines the broad sorts of mistakes startups can make: frantically trying to implement everything, trying to sneak past requirements with smoke and mirrors, or refusing to engage and calling the customer’s bluff. The problems with these approaches should be obvious, but they’re tempting if you’re looking for a quick fix.
Instead, we leaned into the process. Our approach was to earnestly respond to the intent of each item in a Security Questionnaire. If items appeared misguided (such as requiring biometric key card access to our server rooms), then we would discuss them with the customer. I compiled a master document of answers to common questions, eventually allowing us to efficiently copy-and-paste our way through assessments. This grew into a library of prepared documentation that we could send upfront, often allowing us to head off entire portions of the security assessment process before the customer got worried and inundated us with their own home-grown procedures.
Despite making it work for us, it’s hard to say anything good about the Security Questionnaire process. Latacora, a security consultancy, writes them off as dumb at best and a dangerous waste of resources at worst. If there is a silver lining to the process, it’s that it forces the business and engineering teams to talk through security concepts. This ongoing dialogue between the technical and non-technical sides of the organization seemed quite valuable in developing a shared understanding of security, which leads to our next point–
Security is a Team Effort
From early on, security at Recurve was a team effort, focused more on growing the organization’s knowledge, awareness, and motivation to improve security than on simply having me implement improvements alone.
There is simply too much surface area to cover as an individual. Further, it is nearly impossible to move forward on security if there isn’t a shared understanding across the team. If your data scientists aren’t convinced it’s important to treat customer data with care, they’ll download it to their machines without a second thought. If your engineers don’t think application vulnerabilities will really be exploited, then they won’t look up the syntax for parameterized SQL queries. If your business team thinks multi-factor authentication (MFA) is a form of hazing from the engineering team, they’ll have their credentials stuffed in any one of the dozen SaaS apps you use to track sensitive customer information.
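The parameterized-query point is worth making concrete. Here is a minimal sketch using Python’s standard-library sqlite3 module (the table and values are hypothetical, purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE meters (id INTEGER, customer TEXT)")
conn.execute("INSERT INTO meters VALUES (1, 'acme')")

user_input = "acme' OR '1'='1"  # an attacker-controlled value

# Vulnerable pattern: string interpolation lets the input rewrite the query,
# so the WHERE clause becomes "customer = 'acme' OR '1'='1'" and matches everything.
vulnerable_query = f"SELECT id FROM meters WHERE customer = '{user_input}'"

# Safe pattern: a placeholder keeps the input as data, never as SQL syntax.
rows = conn.execute(
    "SELECT id FROM meters WHERE customer = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the injection payload is treated as a literal string
```

The safe version is no harder to write than the vulnerable one, which is exactly the point: most of the cost is knowing to do it at all.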
It’s also so much cheaper to address security up front than to try to bolt it on later. An engineer thinking hard about the design of a feature is also going to have a lot of good ideas about the ways something could go wrong. Furthermore, a secure design often isn’t any harder to implement than an insecure design. By building awareness across the engineering team, security can be realized early on in the development process, requiring many fewer expensive changes in code that has calcified around an unsound core.
In some ways, the biggest benefit of having a security lead was simply the increased awareness it created. There was always someone regularly asking questions about security, testing for bugs, and complaining about cryptography. There was always someone on the #security channel on Slack. There was someone to congratulate engineers for going the extra mile on security matters. Security was in the air, and everyone was encouraged to play their part.
Security Engineering is Product Development
It is surprising how much successful security work resembles successful software product development. It makes sense in hindsight: you’re trying to change the behavior of software users, both technical engineers and less technical business team members.
A number of our security initiatives were tool-building projects that had an entire product development lifecycle. I interviewed the eventual users of the system to discover feature needs and to build buy-in. For a number of the tools, I would use some sort of productivity benefit as a Trojan horse for improved security practice. Finally, I worked to make sure each tool had a clear story for maintenance, ideally by someone besides me, to build further organization-wide security capacity.
One of the more successful examples of this was building out the team’s Terraform stack. Until then, we were manually configuring Google Cloud accounts, which slowed down the implementation of security changes and left the permission model a mess. For engineers, Infrastructure as Code (IaC) with Terraform was just a fun and powerful tool that removed drudgery from customer onboarding. By framing the purpose of a new tool as a productivity benefit, it was adopted much more quickly than if it had been promoted as only a security improvement.
Assume It’s Vulnerable Until You’ve Tested It
Many of the bugs discovered through this work would not have been found in the course of normal development. Thinking of all the ways to exploit software requires a different frame of mind than simply trying to get something working.
It’s not that typical engineers are negligent about security. Instead, they are often so busy with tight deadlines and complex technical decisions that it’s easy for security concerns to slip by, especially if discovering those concerns requires a kind of lateral thinking. Indeed, sometimes security bugs emerge from the interaction of multiple components in a system, not from any particular mistake. The great benefit of having a dedicated security role is that someone is always in the attacker’s frame of mind.
My best example of this was finding an SSRF vulnerability in the most popular Python Single Sign On (SSO) library. The library in question was `pysaml2`, which had hundreds of GitHub stars and was recommended in the Okta tutorial for integrating SSO into Python applications. Even better, one of the maintainers happened to be in the top 5 of the leaderboard for GitHub’s bug bounty program for his specialty skills in bypassing exactly these sorts of authentication systems. You couldn’t ask for a better security story from a library.
As part of our assessment of this security-critical feature, I ran a routine Burp Suite scan. Much to my chagrin, the scan turned up an SSRF vulnerability. After some digging, it turned out to be a genuine vulnerability, not a configuration problem. I reported the issue and the maintainers fixed it within a day.
Finding this bug didn’t require black magic, carefully cultivated over years of hacking in the Pentagon’s basement. It was literally one click away in the most popular application security testing software in the world.
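To illustrate the bug class (this is not the pysaml2 code, just a hypothetical sketch): an SSRF arises whenever the server fetches a URL taken from untrusted input, letting an attacker steer server-side requests at internal infrastructure. One common mitigation is to validate candidate URLs against an allowlist of expected hosts before fetching anything:

```python
from urllib.parse import urlparse

# Hosts we expect identity-provider metadata to come from (hypothetical value).
ALLOWED_METADATA_HOSTS = {"idp.example.com"}

def is_safe_metadata_url(url: str) -> bool:
    """Reject URLs that would let an attacker steer server-side requests."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_METADATA_HOSTS

# An attacker-supplied URL pointing at a cloud metadata endpoint is refused...
print(is_safe_metadata_url("http://169.254.169.254/latest/meta-data/"))  # False
# ...while the expected identity provider passes.
print(is_safe_metadata_url("https://idp.example.com/metadata.xml"))  # True
```

The check itself is trivial; the hard part, as with most SSRF bugs, is noticing that a user-influenced value ever reaches an HTTP client in the first place.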
To put it simply: a startup with a codebase of any reasonable complexity will have significant security bugs if they haven’t been actively testing for them.
Good Engineering Practice Parallels Good Security Practice
There is significant overlap between security improvements and what is simply a good engineering practice – Infrastructure as Code (IaC), secrets management, automated deployments, onboarding checklists, centralized logging, robust regression testing, and on and on.
One great example of this was Recurve’s use of the Heroku platform. Rather than roll our own container-based deployment pipeline and application instances, we built our applications on Heroku’s Platform as a Service. This was originally an engineering decision: we chose to focus on developing our core competency rather than invest in building an undifferentiated application platform.
This decision came with numerous security benefits. We didn’t have to manage security updates. We didn’t have to make decisions about authentication for data stores. We didn’t have to monitor load balancers, databases, and application services for abuse or security patches. During vulnerability assessment, I would find that security bugs tended to have minimal impact because Heroku had applied defense in depth throughout their stack, providing a level of platform security that would have been very difficult to achieve by a few engineers internally, especially with the harried pace of startup software development.
Because it has so much overlap with good engineering practice, good security practice doesn’t necessarily reduce the velocity of an organization. Indeed, building with an eye for quality will realize both.
The challenge of security to an organization can take many forms. It can be a looming existential crisis, scary because of its uncertainty. It can be the worst drudgery imaginable: Kafkaesque security questionnaires that paradoxically make the organization less secure. It can be an unquenchable fire pit for cash: fear has no limit, and the wrong consultants won’t shy away from stoking it. Or it can be a rallying point for a team’s engineers, a pursuit that is not only creative and enriching, but also an opportunity to develop genuine competitive advantage for a business.
We’re proud of the work we’ve done on security so far at Recurve, but it’s never finished! If these challenges sound interesting to you, head over to our Jobs page: we’re always hiring.
Marc Paré, Engineer (Security and Privacy)
About the author:
I've been slinging code since discovering TI-Basic in middle school. My interest in security started at a high school programming camp: I social-engineered the key to a friend's Vigenère cipher. These days, I hack primarily with startups, helping them solve scalability and security problems on the way to Series A and beyond. Besides my security responsibilities at Recurve, I am the technical lead for the Energy Data Vault project, a DOE-funded Recurve project to bring differential privacy to the energy sector.