Gil Zilberfeld
CTO, TestinGil
Tel Aviv, Israel
Gil Zilberfeld has been in software since childhood, writing BASIC programs on his trusty Sinclair ZX81.
With more than 25 years of developing commercial software, he has vast experience in software methodology and practices. From automated testing to exploratory testing, design practices to team collaboration, scrum to kanban, traditional product management to lean startup – he’s done it all.
Gil speaks frequently at international conferences about testing, TDD, clean code, and agile practices. He is the author of "Everyday Unit Testing" and "Everyday Spring Testing", blogs and posts videos on these topics, and in his spare time he shoots zombies, for fun.
Area of Expertise
Topics
Intro to Playwright
Playwright has become one of the main tools for testing web applications. In this packed 1-day intro, we'll learn how we can use it to automate and test web UI.
We'll see how to write, run and debug Playwright tests. We'll go over locators and expectations, the building blocks of Playwright, and also talk about the async aspect of the operations. We'll talk about refactoring the tests using the Page Object Model, and also see how we can use it for API testing.
On top of that, we'll discuss where Playwright fits into the testing work - what types of tests fit Playwright more, and when they can be replaced with simpler tests. We'll also see how to run the tests in a CI environment.
Playwright In (Less than) An Hour
Playwright is an exciting new-ish web automation framework. It has a lot of cool features that make it easier to write web tests. And it's really hard to push all of them into an hour.
But let's try.
In this session I'm going to introduce you to Playwright, why you should try it, and what you can expect to get (including some dark-side bits).
We'll go over:
How Playwright works
Flavors of Playwright
Main features
Supporting tools
The dark-side bits
If you're new to web automation, or new to Playwright, this is an excellent stepping stone to understand how it can help you, and what to watch out for. We'll also take a look at how to keep Playwright tests more maintainable and readable.
The Best Test
Show me a feature and I'll show you ten different ways to test it. Maybe a hundred. Eventually, we'll pick a set of test cases for our automated testing.
We approach automating tests by re-using, relying on, and re-purposing tests we already have. This seems like the faster way to go. But it may not be the best way.
Tests can be flaky. They can take time to run and debug. They can be short and accurate, telling us what's wrong on failure, but still not give us the confidence we want.
We want tests that help us. So we need a way to evaluate them. Then we can decide which tests are best for the job.
In this session, I'm going to talk about different test types and techniques, what they offer, and what they cost. We need to understand what we're testing, what the results of the tests mean, and whether there are better alternatives to what we planned to automate.
Clean Those Web Tests
I like clean code. It shows that people care about their teammates, the ones who will need to step in and understand what is going on.
And when I see clean tests - it's even better. With readable, clean tests, it becomes even easier to maintain the code.
So what makes tests clean? We'll talk about the patterns, and this time, we'll tackle web tests. How can we make them easier to understand, write and maintain? Can it be done by AI?
In this session, we'll talk about anti-patterns in tests, how they get that ugly, and how to clean them up. We'll see how clean code principles apply directly to web tests. We'll start with removing duplication, good naming and proper constants, then move to more complex issues of abstraction, builders and factories, fixtures and test organization. And of course, we'll see how the almighty Page Object Model makes things easier to maintain. Sometimes (I'll explain).
There's a reason most tests look the same. We copy them, but don't get back to refactor them. Let's remove the cobwebs, and make these web tests clean.
How To TDD A Web Component
Web components seem simple. Just strap a couple of elements together, design them to look nice, and you're ready to put them on a page.
But really, they can get very complex quickly. Complexity breeds bugs, and wastes our time. We'd like our components to work the first time, without all those bugs coming at us.
TDD to the rescue!
Test Driven Development is something that we usually apply to back-end logic. But it doesn't have to be. You can use TDD principles, along with the testing tools, of course, to build web components - keeping them simple and maintainable.
Oh, and one more thing - working. When they leave your hands they are working, and if you need to make changes, you've got the tests to protect you.
Web components don't seem like good candidates for TDD. You'll be surprised how well it works for them.
API Test Planning Live!
How do you come up with test cases for your APIs? Is it enough to check they return the right status? No.
APIs are complex, so even a couple can leave us overwhelmed by the options. But options are good. We want the ideas, so we can prioritize based on our needs. We just need to understand our system and come up with the proper ones.
History has proven that the best way to come up with ideas is collaboration. So that's what we'll do.
This is an interactive session on API Test Planning. Given only two APIs (and a semi-sane moderator), we'll come up with creative ways to test them.
Sounds easy? APIs are complex. And in this session, we'll see how much, and how to think of different aspects of APIs when testing.
Refactoring Without a Net
Code sucks. Well, not all code. If your code is clean, it doesn't. But for that to happen, you need to refactor all the time. But here's the catch: Refactoring without tests is risky. Without that safety net, you're not going to try.
So, it's off to writing tests we go.
Unless, of course, there are other ways. Do you have some, Gil?
While there is no real replacement for tests, we can take shortcuts.
In this session, I will discuss refactoring, including examples of how to do it when you don't have tests but still don't want to bring the house down.
I'm going to talk about how to tackle different scenarios, understand the risks, and how to refactor code that doesn't have actual tests. I will also show refactoring patterns using the lovely ApprovalTests open-source tool.
Refactoring may be risky, but it's not a yes/no option. You can still improve the code and make changes easier with these tips.
APIs - Refactoring to Patterns
Microservices allow for quick development and delivery. But if the code is, let's say, of the legacy persuasion, the development quickly slows down.
Refactoring to the rescue.
And this time, it's not just about well-factored, readable code (although we really like it). We'll use common architectural and design patterns that make the code modular, extensible and testable.
In this session, I'll take some (very ugly) microservice API code and refactor it. We'll make sure the domain logic is separated from the infrastructure code, and that the domain logic is unit testable. We'll discuss and use common design patterns (factories, repositories, etc.), and explain how using them helps keep the architecture flexible and maintainable.
Once the code is factored, we'll see what additional tests we can write - and where they help us.
Microservices code should be modular, cohesive and testable.
If it's not - refactoring according to architecture and design patterns is the best way to get there.
How to Fix A Bug (Properly)
In this session, we'll take a holistic look at fixing bugs. We'll start with where bugs come from, and where they are likely to appear (and invite their friends). That affects our planning efforts, and architectural and design decisions.
Then it's off to the hunt. We'll talk about reproducing the bug, and defining what we want to actually fix.
We'll then discuss prevention methods: writing characterization tests, so we don't break working code. Then we'll discuss how to pick the right test for reproducing the problem, using the Saff Squeeze technique. Once we have a failing test, we'll do the right thing (TDD), and finally fix the damn bug. Then, some refactoring, preparing for the future, and possibly writing a few more tests so we can sleep better at night.
Unfortunately, bugs aren't going anywhere soon (I'll tell you why), but we have a couple of tricks up our sleeve to make the swarms smaller.
How To Be A (Spring) Mock Star
Everybody does mocking, right? But who's doing mocking well?
Well, I'm here to help you be a mock star.
In this session, I'm going deep into mocking in Spring testing, what we want to achieve from it, and how to do it effectively.
I'm going to start from unit testing bean-dependent components, and explain Spring's bean life-cycle and how it applies to mocks. I'll discuss testability aspects of using Spring-based architecture, so we can mock dependencies, and finish up with mocking API calls with, and without a real database.
Tools like Mockito make mocking easy, but they only tell you how to use them in basic scenarios. If you want to use them effectively with Spring, you're invited to dive in with us.
We will. We will mock you.
10 Testing Tips on Spring & Spring Boot
So you've been developing Java and Spring for a while, but now it's time to test like the big boys.
We'll be talking about how to use Spring's little-known capabilities to help test your code. But that's not all.
Tools are cool, but we need to use them wisely. We'll talk about how to think about testing APIs, services and units, and how to use Spring's features to our advantage.
I'm also going to discuss what Spring features not to use and why.
Spring and Spring Boot are so prevalent, and if you're serious about writing Spring applications that work, you'll want to join in.
The Quality Dashboard
You’ve got thousands of automated tests running, multiple test and coverage reports and logs – but you can’t see the forest for the trees. The problem is you don’t know: Is it safe to release? With refined, specific metrics, you can define reports (or a dashboard) that tell you the real quality of the product. You can then decide what to do about it.
This is a case study of building a quality dashboard with metrics and reports that matter, for an application with hundreds of APIs and multiple front-ends. Some features were better covered than others, but what that coverage meant was vague. The dashboard was built, collecting information from multiple sources – test reports and coverage reports from Jenkins, custom logs that were farmed for information, SonarQube and more. We then added some “brains” to show the analyzed metrics, in terms of covered and uncovered test cases, test quality and more. We then presented a confidence level calculated from the metrics. The effort involved developers, quality advisors, dev-ops people and others. This session is about this project.
The dashboard helps managers see what features are ready and where the gaps are, and gives developers feedback on how well their tests are working for them. With this session you may be inspired to build quality reports that tell you how well your team is doing.
How to TDD a REST API
If you're working with a framework for writing APIs (and who isn't?), writing tests up front almost guarantees the tests will be ugly. And the code might not be that great, either.
TDD is great for unit tests, but if you take it up a level, for REST APIs, or an end-to-end behavior, TDD may not look like the tool as advertised.
Unless...
TDD can work for building REST APIs. And I'm not talking about just writing a test before the code, but the entire TDD experience, with tests passing every few minutes.
But unlike "regular" TDD, it doesn't just work. We need to prepare and plan a bit before we can harness the power of TDD in our real-world applications.
In this session, I’m going to go step by step of building APIs from scratch, including the thinking of what to do next. I’ll be using small steps, and show you where I backtracked and made changes. At the end we’ll have working APIs, with tests that prove that they work.
TDD is not just for unit tests anymore.
Dirty Tests And How To Clean Them
We write tests and code for other people. Tests are code too, and both should be clean.
As a clean code fanatic, I see it as a personal mission to go around preaching how powerful clean code is. But unfortunately, it seems that test code is not considered "real code", and therefore is not considered "dirty".
In this session, we'll talk about concrete examples of anti-patterns in tests, and how to clean them up.
We'll see how clean code principles apply directly to tests. And that's true for all tests - from unit to end-to-end, and regardless of who writes them - developers and testers.
We'll see those in action.
"Clean code looks like it was written by someone who cares," said Michael Feathers. For that reason, test code may be even more important to write cleanly.
10 Things They Didn't Tell You About TDD
TDD (Test Driven Development) is a well-known, yet rarely implemented, coding practice.
In the wild, you will barely see it fully implemented in organizations. Why is that? Is it because it is hard? Does it work only in special cases?
I'm here to tell you the things that so-called “introduction to TDD” books and articles don't. It is the little things, like how TDD can be applied in the real world, with real code. How TDD principles apply, regardless of what your coding language is. And even, how it changes you as a programmer.
That's right.
TDD changed my life, and it can help improve yours too. But to get on the road, you need to hear about the secrets of TDD. And I’m ready to tell you, from my experience and perspective.
This talk is for developers, obviously, but also for testers who want to promote better quality in their teams, by using developer-speak.
Unit Wars – JUnit vs TestNG
It's the test framework battle you didn't even know you needed. But it's here anyway.
Which test framework is right for you? Sure, TestNG was the cool cousin of nerdy JUnit 4. But then JUnit 4 called its big brother for help. How do JUnit 5 and TestNG match up?
Much more important for us: Which one helps bring out the best in your tests, making them more readable and maintainable? How do they help with TDD? And is either one worth the commitment?
We'll compare features, and see how they impact writing and reading tests. Tools are there to help us, but we'll be maintaining those tests for a while.
So battle-stations everyone! We're going to war!
How to TDD in legacy code
"TDD is great, but it won't work on our legacy code".
I hear that a lot. That's why people don't even give TDD a try. Their code is killing their hope.
TDD's basic examples are, well, basic, and have no relationship to real-world code. But it can work on legacy code, and everyone's got that. You just need to remember a few techniques, stick to the principles, and you can start doing TDD in your application code tomorrow.
In this session I'll show how to do it, the techniques and principles involved. And I'll show how to add TDD code inside an ugly application.
No more excuses then. It's possible to do TDD right there in your own legacy code. Let's do it.
How To Design Effective API Tests
Even if you're doing a whole lot of planning for your tests, you'll probably end up with more than enough tests that are flaky, take long to run, and are hard to debug when they fail.
We're putting so much effort into planning, running and debugging our tests. But if we build tests that run this way, they may well be a big waste of time.
The fact is, we're poor at designing effective tests.
The good news: It doesn't have to be this way.
In this session I'm going to discuss the activity most of us miss in our test plans. We'll discuss different aspects of test design, based on what we want to learn from our tests. When they are designed correctly, our tests become effective. We can trust them when they pass, and understand exactly what they tell us when they fail.
I'll also talk about how aspects of testability can give us more design options and how to plan for that. I'll discuss different options of testing the same APIs, and the trade-offs they carry. Finally, we'll discuss how our test design complements our API test plan.
Test design is a lost art. With a bit of understanding, and goal definition, we can design our tests to work for us, rather than the other way around.
How To Create an API Test Plan
Testing APIs seems simple. You can check them off, one after another.
But is that really testing the product? Can you say that it actually works, and if it works well?
You need an actual test plan, not a check list.
In this session, I'll discuss what test planning and design are, and how they apply to APIs. We'll go over SFDIPOT, a test strategy heuristic that helps thinking about different aspects of quality, and derive test cases from there.
And of course we'll see our plan in action - testing APIs with Postman, understanding the results and what we can learn on the way.
Building a test plan is one of the most valuable skills of testers. It's also where creativity comes into play, and why people really enjoy the process of building it.
APIs love it when a plan comes together. Let's create one.
10 Expert Postman Testing Tips
Postman is an excellent tool for calling REST APIs, and it has become a go-to automation tool for testing APIs.
If you're already familiar with Postman's features, it's time to take it up a notch. In this session, I'm going to talk about how to fit the tool to your purpose. Calling APIs is the easy part, but you want robust, easy to maintain automated tests.
And I'm going to cover how to go from that need to better usage of the tool. How it fits within testing methodology and process, and what's code got to do with it.
10 tips, maybe more. Probably more.
The Big Refactoring
Everybody wants clean code. Everybody waits for the next project to start working on it.
But how often does that happen? There's so much legacy code out there, that for a lot of people, that clean code day may never come.
Or, we can just start.
Clean code is something we can introduce gradually and incrementally. We do small refactoring all the time (rename this, extract that). The problem lies in those "big ones". They are risky and long.
Are we making a difference? Are we making the whole codebase better, or are we creating shiny clean nuggets in the ocean of mud?
In this session, I'll go over refactoring scenarios, how to do them safely, and how to measure our progress. I'll show how to use ApprovalTests and SonarQube, and how to think about the process and track it.
Going from legacy to clean code seems like a big jump. But with the right mindset and metrics, you can improve your code, and make that big-picture difference.
Testability - The Good, The Bad and The Untestable
"This code is not testable!" - I've heard that a few times in my life. But is it really untestable? And what makes it so?
What is testability anyway?
In this session, I'm going to explore testability of code and systems. What we perceive as testable or untestable - and how that perception impacts the quality of the product.
We'll look at examples from code, to APIs to system level, and see how they impact testability. And finally, what we can do to improve it.
Spoiler: Nothing is really untestable. It's a matter of cost, effort and motivation. But we need a few more things.
Learn more about testability and how to improve it.
The Jedi's API Testing Handbook
It is known that during the republic days, the Jedi Council would test the padawans for exquisite testing skills. They put those in a book called Bhast Praktissses. We found it.
The API empire is threatening to take over the galaxy.
The rebel testers are looking to bring it down. But can they find the weak spots?
When testing APIs, we need to take into account a whole bunch of things - from the code itself, our planning strategy, what tools we can use and how to report the results. That's a lot. Yoda we need.
In this session, we'll walk through the more important principles and practices that make API testing effective: from dealing with the complexity to focusing on what we want the tests to tell us. I'll talk (and Yoda will help) about using accurate asserts for the specific cases we've decided to automate, and how to make sure the tests help us understand the system's quality. We'll also talk about the life cycle of tests, and even when to get rid of them.
The Bhast Praktissses book is long. There's enough there for a couple of sequels. At least we got some new hope.
The Foundations of Test Automation
Automation does not exist in a vacuum. It doesn't matter if you're a master programmer or a junior automation tester; automation is just a part of a whole world of development that culminates in a product release. This world has a language.
And I'm here to introduce you to the dictionary.
In this session, I'll talk about how code becomes working code in production as part of the software development and testing process in teams and organizations. I'll discuss how automation helps with that, and go through the modern terms of continuous integration and deployment, team development, integration, different types of tests, environment management, and more. Ok, also git and docker.
Everything that you need in order to understand how your new shiny test suite fits in.
And even more so: how everything, including your shiny test suite, helps move toward the goal of an iteratively working product.
You may have heard the terms from developers, or dev-ops, or senior testers, and you don't see how these relate to you. It's time to learn.
API Exploratory Testing with Postman
Postman is an API invocation tool.
When we're thinking about API testing, we'll use it for operating and checking that the APIs work correctly. But not today. Today, I'll show you how to use Postman to do exploratory testing of APIs.
What is Exploratory testing anyway? How is it different than "regular testing"? And how does that apply to APIs? And is Postman up to the task?
I'll explain the theory behind exploratory tests, and why APIs, being so complex, are a prime candidate for them. We need to explore how they behave before releasing them into the wild.
And don't forget some Postman tips for those "regular testing" bits. Postman is a powerful tool, and it makes sense to know it a bit better.
It's exploration time!
Frankenstein's Tests
The automated tests that we write contain many languages. The programming language, of course, but also the language of the frameworks we use, and the domain language.
For example, in web tests, we use terms like "TextArea" and "Div". While these are the test mechanics, they do not convey the meaning of the test. When we use terms from the domain language, like "attendee" and "registration", we understand them better and can modify them without errors. The problem is we end up using both languages, sometimes more, in the same tests.
The mash-up of languages creates Frankenstein's tests. This is bad. The tests are not readable, open to interpretation, and are hard to understand and maintain. We need to find the original purpose of the test behind the tools' language. In the age of test code generation, the problem gets worse.
There are many good practices about refactoring code, modularizing it, using known patterns. This talk is not about them. It is about using the right language in the tests, and hiding the languages of the tools we use.
I'll show the impact of using different combinations of domain and low-level languages, in both API and web tests, and the costs involved in keeping the tests in franken-mode. I'll discuss how to improve the situation, and what traps to avoid, in order to minimize that work in the first place.
The monster is coming. We need to be on the lookout and fix the problem.
Takeaways:
1. We write tests using common frameworks, sometimes generate them completely. We mix & match low-level and domain languages, and our tests end up looking like Frankenstein’s monster.
2. The language mix causes readability and maintainability issues. They impact the cost and risk of understanding, debugging and modifying the tests.
3. With proper names, abstraction principles and code organization methods, we can lower the maintenance pain.
10 Tips for Better Playwright tests
Everybody loves lists. Everybody loves tips.
And everybody who writes web tests should try Playwright.
Playwright has a lot of things I like, because of the thinking behind it - Playwright is designed for creating better web tests: test isolation, conciseness and readability.
Ok, the maintainability is a bit questionable. But we can work around that.
That's what we've got this session for. Tips on ways to use Playwright to create not just tests, but tests that will serve you once they work, and as your code moves forward.
If you're aching whenever you need to get back to your web tests, if your tests break for no apparent reason, or even if you just like your tests to read like a user workflow, not an HTML parser - this webinar is for you.
WeAreDevelopers Live 2022 Sessionize Event
JCON 2022 ONLINE (virtual) Sessionize Event