Rick Clymer
Quality and Reliability Lead, RocketReach
Cleveland, Ohio, United States
For the past 13 years, Rick has been on his quality journey, making stops at OnShift, Kalibrate, and Rocket Mortgage. He joined the RocketReach team in the fall of 2021 and has loved every minute of it. He is currently working on establishing quality processes and concepts throughout the engineering team, and he is also responsible for the monitoring and reliability side. He works every day with the teams to ensure the solutions they are building are not only well tested but also monitorable, so they know when something is off. Rick looks for new opportunities to raise the customer's concerns, not just about how something works for the end user but also about how it performs as they use it.
Stop Making QA The Last Train Stop Before Production
Throughout my career, I have heard the same excuse over and over again: "We're waiting for QA before we can push this to prod." But why do we keep hearing this? What magic does QA have that no one else has, to grant this permission to go to production? Spoiler alert: we typically don't have many magical powers.
This talk is designed to dispel the misconception that QA is a train stop. Building a quality approach into your entire SDLC should be your goal, and this talk will provide a map for involving quality at every stop along the way rather than waiting until the last stop before the depot. We will discuss how you can bake quality in before a single line of code is written, how to get your engineers more involved in testing their solutions as they build, and how to make that validation stop before production less about a specific QA person or team and more about ensuring you have fed your train the right amount of quality on the way to its final destination. Following this pattern will allow you to get your work to production more quickly, and with more quality, than simply relying on QA to do all of the validation work.
Quality-minded folks should be treated as a resource for improving the overall quality of the SDLC, not a single point of failure at your final destination.
Launch Your API Testing Out Of This Galaxy With Postman
Looking to enhance your API test coverage with a single solution across all of your services? Maybe you want to make sure that your services are integrated properly and working with each other in your deployment pipeline. Or maybe you're looking to have your manual testers begin to write automation. All of these can be accomplished using a single solution, Postman.
In this talk, we will discuss how to use Postman to create a collection that tests all aspects of your APIs. We will start by getting familiar with creating a collection that you can reuse over and over for your testing. Once we have our collection established, we will look at how to write tests that check for correct responses, timely responses, and the expected content in the response bodies our APIs return. We'll review how to reuse the same collection across multiple environments so we don't duplicate our efforts. Finally, we'll look at how to use Newman to add our Postman collection to our CI/CD pipelines.
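As a sketch of the kind of checks described above, the snippet below shows scripts you could attach to a request's Tests tab in Postman; it runs in Postman's sandbox after the request completes. The endpoint's response shape (an object with "id" and "email" fields) and the 500 ms budget are illustrative assumptions, not part of the talk.

```javascript
// Postman test scripts run in the sandbox after the response arrives.
// The response fields and timing budget below are hypothetical examples.
pm.test("returns 200 OK", function () {
    pm.response.to.have.status(200);
});

pm.test("responds within 500 ms", function () {
    pm.expect(pm.response.responseTime).to.be.below(500);
});

pm.test("body contains the expected fields", function () {
    const body = pm.response.json();
    pm.expect(body).to.have.property("id");
    pm.expect(body).to.have.property("email");
});
```

The same collection can then run headlessly in a pipeline with Newman, e.g. `newman run my-collection.json`, failing the build when any of these assertions fail.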
By the end of this talk, you will know how to create a collection you can add to your deployments to build confidence in your services quickly and easily. Using the testing methods discussed in this talk will allow you to move your testing away from a purely manual state and build more confidence in your API layers.
Ensure Your Users' Experience - A Trip Around User Validation Tools
In today’s fast-feedback world, getting our product in front of users often is incredibly important. But how do we know our product is ready to go in front of our users? Sure, we have thousands of unit and integration tests on each of our microservices, but what happens when we put them all together? Ensuring our product works as expected when all of the pieces come together is the final piece of the puzzle, giving us full confidence that we are ready to release and that our users are having a consistent experience.
In this workshop, we will discuss the different methods we can use to gain confidence in our product. The main focus is on the different tools available to automate our validations. We will spend time getting to know three open-source tools: Selenium (and some of the solutions built on WebDriver, like Protractor and Cucumber), Cypress.io, and TestCafe. We’ll discuss best practices for each of the tools, as well as some ways we can make our product more testable. We will also look at how these tools can help share knowledge of how our product works, as well as of our product’s codebase. And while using these tools is a great way to know if we are ready to ship, we’ll also look at some things you can do in production to ensure that you know about the issues your customers are having before they even call you.
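To make the testability point above concrete, here is a minimal sketch of what one of these tools looks like in practice, using Cypress. The URL, the `data-testid` selectors, and the checkout scenario are hypothetical placeholders; the `data-testid` attributes are an example of the "make our product more testable" practice, giving tests stable hooks that survive styling changes.

```javascript
// A minimal Cypress spec (runs inside the Cypress test runner).
// URL, selectors, and the scenario are hypothetical placeholders.
describe("checkout flow", () => {
  it("lets a user add an item to the cart", () => {
    // Load the app under test.
    cy.visit("https://shop.example.com");

    // Stable data-testid hooks keep the test independent of CSS changes.
    cy.get("[data-testid=add-to-cart]").click();

    // Cypress retries this assertion until it passes or times out,
    // which smooths over asynchronous UI updates.
    cy.get("[data-testid=cart-count]").should("contain", "1");
  });
});
```

Selenium and TestCafe express the same scenario with their own APIs; the workshop compares how each handles waiting, selectors, and cross-browser runs.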
The hope is that after this workshop, you will understand each of the tools we discuss and have the confidence to pick a path forward for building your organization’s confidence in its ability to release to production. Using one, some, or all of these tools will open up new areas of your product to watch and give you more confidence that your users are having the experience they desire and you imagine them having.
Don't Just Fix It, Learn From It - The Importance of Incident Management when Something Breaks
Panic messages saying the system is having issues. Your phone buzzing as your alerting system sends you texts about the system being down. Intuition kicks in and tells you to solve the issue as fast as possible and get back to your day. But while you have solved the issue at hand, you're not setting yourself up for future success or preventing the same thing from happening next time around.
In this session, we will discuss the importance of not just solving the issue at hand but also learning from it and improving your processes. We'll review topics such as documenting the incident as it is occurring, the importance of playbooks, and leading a successful post-mortem to make sure this isn't a fix-and-forget situation. We'll go through a mock incident to see how we can incorporate each of these processes, and others, to ensure that we learn from our mistakes and prevent similar scenarios from happening in the future.
While getting your system usable for your end users should be goal number one, the very next goal is not falling into a similar state in the future. With this process in place, you will have the tools in your belt to prevent it from happening again.
Climbing To The Top Of The Mobile Testing Pyramid
Planning to test a mobile application can be quite confusing. Real devices? A shrunk-down browser? Device hardware? Using the mobile testing pyramid to guide our efforts allows us to be more efficient in our testing (and maybe development). In the end, getting a quality product into our customers’ hands should be our top goal, and we can all achieve it efficiently by using the mobile testing pyramid.
My hope is that, with the real-life examples in this talk of how we have used the pyramid, you will walk away with a better idea of how to be more efficient at the different levels of the mobile testing pyramid.