All you need is fast feedback loop, fast feedback loop, fast feedback loop is all you need (JavaCro '25)
icougil
62 slides
Oct 10, 2025
About This Presentation
Have you ever been on a project where desperation can get the better of you? Where it was more of an odyssey to get a change working in a real environment... in less than 1 or 2 hours? Or where, to run a simple experiment, the flow you must follow until you deploy your changes takes a day... if not more? Ah yes, we've all been there, haven't we?
Get ready in this session to understand how and why having the fastest feedback possible is a goal we should pursue individually, as a team (and as a company), and to see the many benefits it can bring and how it can revolutionise our software development process. By minimising the time between code changes and feedback, teams can accelerate bug detection, improve software quality, enhance collaboration... and even become happier than before. We'll explore key components like continuous integration, automated testing and monitoring, highlighting best practices and strategies. Expect also to hear about DORA metrics, running experiments, feature flags, some numbers on costs and money savings, and cases based on real facts.
And at the end, get ready to sing along (emulating a famous band): "Fast feedback loop, fast feedback loop, fast feedback loop is all you need!" 😉
All you need is fast feedback loop, fast feedback loop, 🎶 fast feedback loop is all you need 🎹 October 2025
Who am I? Nacho Cougil, Principal Software Engineer at Dynatrace. TDD & clean code fan. Started writing Java before Y2K. Founder of the Barcelona Java Users Group (BarcelonaJUG) & co-founder of the Barcelona Developers Conference (DevBcn), formerly the Java and JVM Barcelona Conference. Java Champion. Father, former mountain marathon runner 😅. @icougil · icougil.bsky.social · https://mastodon.social/@icougil
- We should isolate our environments so we can easily identify which functionalities are not working.
- Let's use new infrastructure and deploy everything from scratch every single time we push any changes.
- We have some flaky tests.
- Nah, just retrigger the build.
- Hey, it is a bit complicated to show what we have built in every Sprint. What if we create some simulations with fresh data to demo all the functionalities much better?
- Yes, sure! This is feasible and looks reasonable.
- How can we be 100% sure that our service is working as expected with all the external services?
- Yes, let's "freeze" each service for every deployment and run all the integration tests across all the services.
- Hey, our customers are notifying us that we have some errors in our system. We must do end-to-end tests to verify the functionalities are working as expected!
- Yes, of course! We must do it!
- Let's include it in our pipeline!
- We are worried about the performance of our system. Should we include some load tests in our CI system?
- Sure, why not?
https://xkcd.com/303/
We should strive for an efficient, simple and reliable mechanism for delivering changes
why?
Remember: we are paid to deliver value to our customers. Software that: is used by customers, is working as expected, can be changed.
The agile manifesto (2001) https://agilemanifesto.org
Adapt to changes: we must adapt to changes. Adaptation = Agility.
Again, the agile manifesto https://agilemanifesto.org/
Ability to iterate: We must deliver {working} things. We MUST iterate 🔁 fast to save money 💰 & time ⏳ and to make our developers happy 😊 { because we want to deliver more & faster features than our competitors }.
Let's do some maths: what's the overall amount of time developers are waiting for local builds or PR builds to finish?
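As a purely illustrative back-of-the-envelope calculation (these numbers are assumptions, not from the talk): a team of 10 developers, each waiting for 5 builds a day at 10 minutes per build, loses 10 × 5 × 10 = 500 minutes, roughly 8 person-hours, every day. Over ~220 working days that is about 1,800 person-hours a year, close to one full-time engineer spent just waiting.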
It is about money: Netflix reduced a 62-minute test cycle time to just under 5 minutes (in just 1 app). https://gradle.com/blog/netflix-pursues-soft-devex-goals-with-hard-devprod-metrics-using-test-distribution/
… but only money? Top 5 reasons for developers to be happy at work: salary (60%), work-life balance (58%), flexibility (52%), productivity (52%) and growth opportunities (49%). https://stackoverflow.blog/2022/03/17/new-data-what-makes-developers-happy-at-work/
… but only money? Top 3 reasons that make a future employer attractive: developer experience (53%), salary transparency (41%), learning from others (40%). https://stackoverflow.blog/2021/12/07/new-data-what-developers-look-for-in-future-job-opportunities/
Interesting flow: improving the development process will make developers more efficient and will make developers happier (+ quality, + collaboration, + productivity).
how?
Try things! Run experiments! Will our users understand this super cool feature, or will they get frustrated because it's so complicated? We must learn by experimenting (lean experimentation): hypothesis → build an experiment → analyse the results → act.
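A minimal sketch of how such an experiment could be wired up in code (the variant names and the 50/50 split are illustrative assumptions, not from the talk): each user is deterministically assigned to the same variant, so exposures can later be analysed per bucket.

```java
import java.util.UUID;

// Sketch: deterministically assign users to an experiment variant so the
// same user always sees the same behaviour across sessions.
public class ExperimentBucketing {

    // Hypothetical experiment: does a "simplified checkout" reduce drop-off?
    static String variantFor(UUID userId) {
        // Stable hash of the user id decides the bucket; 50/50 split here.
        int bucket = Math.floorMod(userId.hashCode(), 100);
        return bucket < 50 ? "control" : "simplified-checkout";
    }

    public static void main(String[] args) {
        UUID user = UUID.randomUUID();
        System.out.println("User " + user + " gets variant: " + variantFor(user));
        // Next steps (outside this sketch): log exposures, analyse the results, act.
    }
}
```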
Feature flags, for the win! A technique that allows disabling or enabling certain features or code paths in a product or service without modifying the source code. A toggle allows you to turn a feature on/off and test it. https://openfeature.dev
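A minimal sketch using the OpenFeature Java SDK linked on the slide (the flag name and the checkout service are illustrative; in a real setup a concrete provider, e.g. flagd or a vendor SDK, would be registered at startup, otherwise the default value is returned).

```java
import dev.openfeature.sdk.Client;
import dev.openfeature.sdk.OpenFeatureAPI;

// Sketch: toggle a code path at runtime without redeploying the service.
public class CheckoutService {

    private final Client flags = OpenFeatureAPI.getInstance().getClient();

    void checkout() {
        // "new-checkout-flow" is a hypothetical flag key; false is the safe default.
        boolean useNewFlow = flags.getBooleanValue("new-checkout-flow", false);
        if (useNewFlow) {
            // new, experimental checkout path
        } else {
            // existing, stable checkout path
        }
    }
}
```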
Use case: Paramount. Deploy/release twice a month VS deploy 6-7 times a day. Up to a week to fix bugs VS fix bugs in a day. https://launchdarkly.com/blog/paramount-improves-developer-productivity-100x/
Remove dependencies and bottlenecks: keep end-to-end tests to the minimum. Fewer UI tests, more integration tests. Decouple components. Adopt contract testing. Separate pipelines.
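A minimal sketch of the "fewer end-to-end tests, more integration tests" idea, using JUnit 5 and a hand-written fake (OrderService and PaymentGateway are hypothetical names, not from the talk): the external service sits behind an interface, so the test gives feedback in milliseconds instead of "freezing" every real service for a full end-to-end run.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class OrderServiceTest {

    // The external dependency is hidden behind an interface we own.
    interface PaymentGateway {
        boolean charge(String customerId, long amountCents);
    }

    static class OrderService {
        private final PaymentGateway gateway;
        OrderService(PaymentGateway gateway) { this.gateway = gateway; }
        boolean placeOrder(String customerId, long amountCents) {
            return gateway.charge(customerId, amountCents);
        }
    }

    @Test
    void placesOrderWhenPaymentSucceeds() {
        // Hand-written fake: no network, no shared environment, no flakiness.
        PaymentGateway alwaysApproves = (customer, amount) -> true;
        OrderService service = new OrderService(alwaysApproves);

        assertTrue(service.placeOrder("customer-42", 19_99));
    }
}
```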
Some benefits: faster builds & deployments, improved team autonomy, less noise for developers → happier developers, better documentation, helps to improve communication between teams.
Shift left 👈! Move all the testing earlier in the lifecycle (i.e. move it left 👈 on the project timeline). "Test early and often". Fail fast!
Some benefits: early defect detection, better predictability and planning, cost efficiency, improved quality, customer satisfaction, faster time to market, enhanced collaboration. https://www.sonarsource.com/blog/leveraging-sonarqube-sonarcloud-and-sonarlint-for-effective-shift-left-practices/
Use case: Etsy. Etsy deploys more than 50 times a day. Developers deploy on their 1st day. Engineers get productive faster. The level of cooperation (developers, ops, etc.) is higher. Features are tested more easily by the teams. https://www.infoq.com/news/2014/03/etsy-deploy-50-times-a-day/ https://www.etsy.com/codeascraft/how-does-etsy-manage-development-and-operations
Use case: Hilti. Deployment times decreased from 3 hours to 15 minutes (12x faster). Feedback loops shortened from 6 days to 3 (50%). Number of code checks increased from 6 times every 3 months to twice a week (400%). Higher quality. https://about.gitlab.com/customers/hilti/
Monitoring is also a must. Real-time monitoring of applications and infrastructure can help detect and resolve issues before they become problems. Benefits: increased application stability and uptime, faster issue resolution, reduced operational costs, improved developer and operational productivity, better customer experience.
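As one possible illustration (Micrometer is my assumption here, not a library named on the slide; metric names are made up): emit a latency timer and an error counter that a monitoring backend can chart and alert on.

```java
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

// Sketch: instrument an operation so issues surface in minutes, not hours.
public class CheckoutMetrics {

    public static void main(String[] args) {
        MeterRegistry registry = new SimpleMeterRegistry(); // in production: Prometheus, Dynatrace, etc.

        Timer checkoutTimer = Timer.builder("checkout.duration").register(registry);

        // Record how long the (simulated) operation takes.
        checkoutTimer.record(() -> doCheckout(registry));

        System.out.println("checkout.duration count = " + checkoutTimer.count());
        System.out.println("checkout.errors = " + registry.counter("checkout.errors").count());
    }

    static void doCheckout(MeterRegistry registry) {
        try {
            // ... business logic would go here ...
        } catch (RuntimeException e) {
            registry.counter("checkout.errors").increment(); // count failures for alerting
            throw e;
        }
    }
}
```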
Use case: Toyota. Reduced the Mean Time To Detection (MTTD) from 6 hours to 15 minutes (96% lower). Faster delivery: teams ship projects in weeks (instead of quarterly). New developers and contractors onboard in 3–4 days instead of 8–12 weeks (20x faster). https://www.datadoghq.com/case-studies/toyota/
Use case: Bank of New Zealand. 58% increase in (high-quality) software releases. Major service incidents down 94% (over the past 5 years). Ability to anticipate and resolve issues before impact. Increased operational efficiencies and better collaboration across teams. https://www.dynatrace.com/customers/bnz/
DORA metrics. Metrics can help you measure your team's performance and identify areas for improvement. Throughput: change lead time, deployment frequency. Stability: change fail percentage, failed deployment recovery time. https://dora.dev/
DORA metrics
Throughput:
- Change lead time: time ⌛ it takes for a code commit or change to be successfully deployed to production.
- Deployment frequency: how often 🔄 application changes are deployed to production.
Stability:
- Change fail percentage: percentage of deployments that cause failures in production, requiring hotfixes 🚨 or rollbacks.
- Failed deployment recovery time: time it takes to recover 📈 from a failed deployment.
https://dora.dev/
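A minimal sketch (with made-up deployment data, purely for illustration) of how two of these metrics could be computed from a team's deployment history:

```java
import java.time.LocalDate;
import java.util.List;

// Sketch: compute deployment frequency and change fail percentage
// from a small list of deployment events.
public class DoraMetrics {

    record Deployment(LocalDate date, boolean failed) {}

    public static void main(String[] args) {
        List<Deployment> lastWeek = List.of(
                new Deployment(LocalDate.of(2025, 10, 6), false),
                new Deployment(LocalDate.of(2025, 10, 7), false),
                new Deployment(LocalDate.of(2025, 10, 8), true),
                new Deployment(LocalDate.of(2025, 10, 9), false),
                new Deployment(LocalDate.of(2025, 10, 10), false));

        double deploymentsPerDay = lastWeek.size() / 5.0; // 5 working days
        double changeFailPercentage =
                100.0 * lastWeek.stream().filter(Deployment::failed).count() / lastWeek.size();

        System.out.printf("Deployment frequency: %.1f per day%n", deploymentsPerDay);
        System.out.printf("Change fail percentage: %.0f%%%n", changeFailPercentage);
    }
}
```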
"When a measure becomes a target, it ceases to be a good measure" https://en.wikipedia.org/wiki/Goodhart%27s_law Having one metric to rule them all Making disparate comparisons (ex: mobile app VS mainframe) Having siloed ownership ( devs , devops , etc) Don't compete! (the objective is to improve your team’s performance) Warn ing!
Will it be enough?
Monitoring · Continuous integration · Feature flags · Remove dependencies · Shift left · Automated testing · DORA metrics · Run experiments · Delete bottlenecks
Summary (Practice/Strategy: Benefits)
- Run experiments: rapid learning and iterative development
- Feature flags: help in rolling out any new change for our users
- Remove dependencies: fostering team autonomy and better communication
- Delete bottlenecks: faster development & deployment process
- Shift left: flexibility and speed for fast feedback
- Automated testing: reduce testing cost and time
- Continuous integration: frequent integration & testing for rapid feedback on code quality
- Monitoring: visualise and better understand how our systems behave
- DORA metrics: quantifiable indicators for measuring and optimizing our development process
Questions? [email protected] · https://nacho.cougil.com · @icougil · https://mastodon.social/@icougil · icougil.bsky.social · This presentation · Feedback form