Better, or Just New?
In a world full of hype, how can teams tell if new tech is actually worth it? A reflection on how to weigh evidence, reduce risk, and make more grounded decisions in complex systems.
Software…software always changes
Within the software engineering world[1], there is one golden rule: in order to change something, you must propose a better replacement. Better enough to justify the cost (or friction) of change. Otherwise, why would you change it? The industry thrives on novelty, improvement, and — let’s be honest — a fair bit of hype.
The simplicity of the idea above fades as soon as you ask for a definition of better, but let's skip that discussion for the time being[2] in favor of another one. Even assuming we know what ‘better’ means, you still have a long way to go to ensure the new thing is actually better. Everyone dreams of hard proof: direct evidence based on solid background (e.g. literature), the smoking gun. But most of the time, all engineers can realistically hope for is good-enough evidence, proxy metrics, and reasonable certainty that the proposed replacement is better than what you’re trying to replace. Actually proving things is mostly reserved for academics and researchers, not to mention that what software engineers openly call proof would be “circumstantial evidence” to a lawyer or, at best, “sparse observational data” to a physicist. Even when solid evidence comes from universities and research centers, it has to be interpreted and digested to support decision making. And more often than not, teams are unable to come up with strong, contextualized results; real software companies are a lousy laboratory. Strong experiments need stable, controlled environments and strong metrics, which is the opposite of any place I have ever worked (and for good reason: most companies do need to be able to adapt fast). Ultimately, most teams do not have anything close to hard evidence to make decisions about the tech stack. Often not even enough to justify starting a “trial phase” or “proof of concept” project.
When I see recurring decision-making situations (like whether to adopt new tech), I really wish we had a framework to help us out: the whole process boiled down to simple steps and straightforward questionnaires to fill out, making the decision easier. Then I remember we are dealing with decisions about complex systems, which makes it virtually impossible to extract a process or method that worked somewhere else and simply replicate it in your organization's context. What's discussed here is a general overview of the forces at play when these decisions are made, and it is very far from what people would consider a fully formed framework. But by making these forces easier to identify, we can work out how to interpret, evaluate, and accept their influence, based on the underlying incentives and potential returns, before making the decision.
The Triangle of Tech Adoption
When software teams need to decide whether to adopt a new piece of tech[3], most have no alternative but to rely on an intricate balance between External Signals, Internal Conditions, and Inherent Technical Properties.
External Signals are the information available about the tech from the broader ecosystem — including research teams at universities and large enterprises, colleagues, government bodies, vendors, public benchmarks, and so on. Naturally, each source carries a different level of trust, depending on the quality of the evidence and the interests at play. Some of these are empirical observations, others are trends or informed guesses[4], and most require careful interpretation.
Inherent Technical Properties refer to assumptions grounded in fundamental knowledge about the tech — expected outcomes based on its architecture (e.g., monolithic vs. microservices, concurrency), behavior (e.g., how a database scales under load), or constraints (e.g., the limitations of a framework like Flask in certain high-traffic environments). These help assess whether external claims make sense technically, acting as a form of analytical reasoning in the face of uncertainty. Especially with emerging tech, there's a significant gray area between External Signals and Inherent Technical Properties, but overall the latter should be the closest thing you can get to facts: claims with strong evidence behind them, able to support informed assumptions during the decision-making process. They usually arise from well-controlled (and hopefully well-described) environments, which limits how far the findings can be generalized. This forces the decision maker to fully understand the initial conditions of the original experiment to assess whether the same results should be expected in the scenario at hand.
Consequently, both the strength of external signals and the clarity of inherent properties depend heavily on the Internal Conditions — the characteristics of the organization, team, and industry where the decision is being made. Software engineering practices are deeply context-sensitive, and any externally sourced claim must be filtered through these contextual constraints. Many seemingly solid arguments break down when their underlying assumptions don’t hold in a specific setting — or worse, turn out to be irrelevant.
These three forces should be explored and overlapped so that, together, they cast a hopefully realistic (and therefore useful) model of the future. If done correctly, the projection should be clear enough to answer whether the new tech should be adopted.
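To make the idea of overlapping these forces slightly more concrete, here is a deliberately naive sketch in Python. Nothing in this article prescribes a scoring model; the force names, confidence values, and aggregation rule below are all hypothetical. The only point is that writing the inputs down makes gaps and imbalances easier to spot.

```python
from dataclasses import dataclass

# Toy sketch only: not a framework. Field names, confidence values, and the
# aggregation rule are hypothetical.

@dataclass
class Evidence:
    force: str         # "external_signal", "internal_condition", or "inherent_property"
    claim: str         # what the evidence says
    supports: bool     # does it favor adopting the new tech?
    confidence: float  # 0.0 (rumor) to 1.0 (well-controlled result that matches our context)

def summarize(evidence: list[Evidence]) -> dict[str, float]:
    """Net confidence-weighted support per force; totals near zero flag gaps or conflicts."""
    totals: dict[str, float] = {}
    for e in evidence:
        sign = 1.0 if e.supports else -1.0
        totals[e.force] = totals.get(e.force, 0.0) + sign * e.confidence
    return totals

evidence = [
    Evidence("external_signal", "vendor benchmark shows 2x throughput", True, 0.3),
    Evidence("inherent_property", "new engine avoids the global lock we hit today", True, 0.7),
    Evidence("internal_condition", "team has no operational experience with it", False, 0.8),
]
print(summarize(evidence))
# {'external_signal': 0.3, 'inherent_property': 0.7, 'internal_condition': -0.8}
```

The arithmetic is not the point; a result dominated by low-confidence vendor claims, or by a single force, is exactly the kind of unbalanced mix discussed next.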
Finding the balance
What makes a framework for this problem impossible is that all these forces emerge with different intensities and in different forms for each real case. The only certainty is that there are so many factors at play that no two decisions will have the same input quality or distribution — each one will have its own unique flavor. The initial discussion often starts with an unbalanced mix of forces, which ends up skewing the decision: most of the available information comes from the vendor itself, or all available benchmarks are very limited. Often the claims around a new approach or paradigm are vague, and very few practitioners have spoken about the real benefits they collected. Sometimes the available information is so flaky that the team decides not to pursue the decision (or simply rejects the new thing) because of the attached uncertainty. Gathering more information and producing compelling evidence might be expensive enough to take away the expected benefits.
But sometimes, tuning the composite forces is a matter of casting the right light over them. For instance, if most of the information about a new database comes from the vendor’s website, it’s crucial to also look at independent reviews, benchmark tests, and real-world case studies to see past any potential bias and arrive at a more grounded decision. Enough to mitigate the outstanding risks to a point where we are comfortable with the prospect. But we just can't realistically expect a near-zero-risk situation.
Making the Call with Imperfect Inputs
If the team waits for the perfect distribution among External Signals, Internal Conditions, and Inherent Technical Properties, most decisions will never happen. Adjusting the inputs to grasp every aspect of the context behind the decision and build up the strongest case possible is expensive, slow, and probably defeats the purpose of the proposed change.
But even without a perfect balance and full visibility, we can leverage the power of continuous improvement. Rather than striving for a perfect solution from the start, teams can iteratively test, learn, and refine their approach. This mindset allows teams to start small with experiments, measure the outcomes, and gradually scale the successful ones while abandoning or adjusting those that don’t work. With each cycle, the team’s understanding grows and the technology’s fit within the organization becomes clearer. It's all a matter of keeping the list of unknowns short and the associated risk low.
Which leads to the same conclusion as before: most of the time we will be deciding based on an imperfect model — which doesn't mean it can't be useful. Working with imprecise information and limited experiments is part of the job. The key is finding what's measurable (even if indirectly) and useful (even if limited) and validating the results against the expected outcomes (based on Inherent Technical Properties). That should be enough to decide whether the team should try to adopt something new.
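As a small, hypothetical illustration of that validation step: the metric names and thresholds below are invented, but the shape of the check is simply "did the measured proxies land where the Inherent Technical Properties said they should?"

```python
# Hypothetical example: comparing measured proxy metrics against the outcomes
# we expected from the tech's inherent properties. Names and thresholds are
# invented for illustration.

expected = {
    "p95_latency_ms": lambda v: v <= 120,  # e.g., the new cache should keep tail latency under ~120 ms
    "error_rate": lambda v: v < 0.01,      # and should not regress reliability
}

measured = {"p95_latency_ms": 97.0, "error_rate": 0.004}  # collected during the limited experiment

verdict = {name: check(measured[name]) for name, check in expected.items()}
print(verdict)  # {'p95_latency_ms': True, 'error_rate': True}
```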
Building to Learn
What do you do when the uncertainty about whether to adopt a new tech is too high? Many unknowns increase the risk and diminish the expected return on the investment, decreasing the chances that the team will actually try the new thing.
Most of the theoretical discussion should take place only to support the decision of whether to conduct a trial period for the new tech. Because Internal Conditions heavily influence the results, it is impossible to firmly conclude much from past experiences and theory alone. That means developing a proof of concept or an MVP that can actually show how the candidate tech performs in the real context. The trial setup includes decisions about scope (e.g., testing just the new API or a full feature), duration (e.g., running the trial for one sprint or three months), and the metrics used to evaluate the aspiring tech (e.g., response time, developer adoption rate, or cost savings in cloud services). The experiment must be designed to balance simplicity (to justify the associated risk of not producing any improvements, which is still high at this point) against outcomes solid enough (within reason) to actually be evaluated. Outcomes are more than just quantitative metrics (which are rarely available or conclusive enough for a cold, straightforward verdict); they also include the feedback of the engineers involved.
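If it helps to picture it, a trial plan can be written down as plainly as this. The structure and field names below are hypothetical, not a standard template; the values just reuse the examples from the paragraph above.

```python
from dataclasses import dataclass, field

# A sketch of how a trial could be written down before it starts. The
# structure and field names are hypothetical.

@dataclass
class TrialPlan:
    candidate: str                   # the tech under evaluation
    scope: str                       # e.g. "just the new API" vs. "a full feature"
    duration: str                    # e.g. "one sprint" or "three months"
    metrics: list[str] = field(default_factory=list)      # quantitative signals to collect
    qualitative: list[str] = field(default_factory=list)  # feedback to gather from the engineers involved

plan = TrialPlan(
    candidate="hypothetical managed message queue",
    scope="replace the notification worker only",
    duration="one sprint",
    metrics=["response time", "developer adoption rate", "cost savings in cloud services"],
    qualitative=["perceived ease of debugging", "friction reported during code review"],
)
print(plan)
```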
From Uncertainty to Informed Action
Many teams struggle when deciding whether to adopt new tech, especially when it replaces something that's been around for a while. Not just frameworks, cloud resources, and programming languages: anything that influences the Software Development Lifecycle tends to generate a lot of discussion, which ends up creating friction that can diminish the expected gains just by making the adoption more expensive.
By analyzing the available data through the lens of External Signals, Internal Conditions, and Inherent Technical Properties, teams can not only understand the forces at play, but also identify gaps in the available information that would make the decision harder and hence increase the associated risk. By balancing out the inputs and planning the simplest experiment capable of showing the expected outcome, contextualized by the Internal Conditions of the organization, teams can evaluate the cost and return of adopting new tech. The lower-risk path nourishes innovation and continuous improvement, feeding an experimentation cycle that produces internal knowledge and keeps teams ready to adopt high-value tech while avoiding the hype.
Lead image by Mila Aguiar
[1] It seems reasonable to me that some of the topics here may apply to other fields, including other forms of engineering and management; after all, many of the challenges regarding measurement and efficiency we discuss here are merely echoes of complex systems' inherent properties. I just don't want to assume things about contexts I have no expertise in.
[2] To be honest, I believe that one of the most important traits of the management/leadership body of any company is the ability to detect what needs to be improved and in what order. Or even better, knowing when not to try to fix something that isn't broken (or is at least good enough for the time being and doesn't need improvement).
[3] An intentionally broad definition of “piece of tech”, meant to also include frameworks, processes, tools, programming languages, etc.; anything that would show up on a software architecture diagram or in a CI/CD pipeline.
[4] Great examples of reputable educated guesses are reports crafted by industry-leading teams, like the Thoughtworks Tech Radar and the State of DevOps report. It's always important to understand the conditions behind the underlying research and experiments. No amount of “reputability” should be enough to justify blindly accepting the takeaways.


