Mar 14, 2023
There’s a reason we use the word “confidence” as Skyramp’s primary dev contribution. Not convenient, or easy, or powerful, or any other hyperbolic marketing gibberish. Just “confidence.” But why?
Because when it comes to API and microservices testing, and banking for that matter, a solution you can’t rely on may be worse than worthless. Deleterious, even. The value of knowledge often hinges on its accuracy. Let’s take the banking example first.
This is a tough week for us, and many of our brothers and sisters in startupland. SVB was a trusted partner in keeping the cash we fought so hard to obtain. Our lifeblood. We never worried about WHERE we put the money, only about using it wisely and getting more of it. Yeah, we were confident in the WHERE. And the rate at which that trust evaporated is, for most of us, hard to swallow.
Different players in the software development, integration, and delivery process (the whole CI/CD Megillah) look for different things from testing cloud native apps, because they care about different things. From the inner dev loop to the outer dev loop and beyond; from narrow integration tests to broad integration tests; from load tests to stress tests. If the results are either meaningless or inaccurate, they are less than worthless.
A few distinctions: no tests are ever going to catch every bug or error. And the truth is, devs in the inner loop care about some types of errors, but not others (say, environment-specific production errors). Same goes for QA, and all the way down the line to SRE. Part of confidence comes from the expectation that testing solutions answer the questions raised by a particular individual in a particular role. For Skyramp to give devs (and others) confidence when they develop, test, integrate, deploy, and ship code, we have to facilitate asking and answering the discrete questions of different types of users with different responsibilities. Skyramp tools have to be easily customizable, and shareable where appropriate, across a pretty diverse range of users.
But when devs do care about knowing something, asking a question, and testing, they and others expect the results of those tests to accurately answer the question asked. In science we call that high specificity (i.e., few to no false positives) AND high sensitivity (i.e., few to no false negatives). Anyone who had their nose swabbed in the early days of COVID knows the pain of low sensitivity and low specificity, empirically. The other half of dev confidence comes from accuracy — high testing sensitivity and specificity. It doesn’t have to be perfect, but perfection is the asymptotic goal for questions asked and answered.
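To make the sensitivity/specificity framing concrete, here’s a minimal sketch (illustrative only, not Skyramp code) of how the two measures are computed from a test suite’s confusion counts. The function names and the sample numbers are hypothetical.

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of real bugs the tests actually flag (high = few false negatives)."""
    return true_pos / (true_pos + false_neg)


def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of healthy code the tests correctly pass (high = few false positives)."""
    return true_neg / (true_neg + false_pos)


# Hypothetical tallies: 95 real bugs caught, 5 missed;
# 980 clean runs passed, 20 flaky false alarms.
print(sensitivity(95, 5))    # 0.95
print(specificity(980, 20))  # 0.98
```

A suite with low specificity trains devs to ignore red builds; one with low sensitivity lets bugs ship quietly. Confidence requires both numbers to be high.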
In an upcoming blog we’ll talk more about what we mean by integration tests, specifically the narrow vs. broad distinction, the interplay with inner and outer dev loops, etc. Until then, chin up to our tech colleagues everywhere. With this latest FDIC news, I’m confident we’ll make it through this.