Wojciech Gruszczyk: Senior Product Manager at co.brick
Strategic inflection points
In his famous book – Only the Paranoid Survive – Andrew S. Grove, former CEO of Intel, defined a strategic inflection point as:
"a time in the life of a business when its fundamentals are about to change."
The observation Grove made has held true for centuries, and the latest shifts in technology only reinforce its validity. Let's take a look at a few such moments when new inventions changed the world and the businesses of their time:
- Gutenberg’s press,
- the steam engine and, later, the gasoline engine,
- the Internet,
- Deep Learning.
Anyone can easily multiply these examples and find plenty of evidence reinforcing the thesis Grove presented: whoever does not react to such a change in time will, sooner or later, be outperformed by those who adapt and take advantage of the opportunity the change brings.
Cloud-native: the buzzword of the '10s
During the '10s, becoming cloud-native was one of the biggest ambitions of the large-scale vendors. After a long period of neglecting the cloud as the new home for enterprise-class customers, all the key players attempted the move. Businesses understood that data privacy was achievable in the new architecture, while costs could be significantly reduced.
That transformation was usually not successful. In the end, the solution offered as cloud-native was in fact a hosted version of the old on-premise product. A good summary of that pseudo-transformation in the e-commerce world is Kelly Goetsch's article reflecting on the transformation of the largest platforms: Your Legacy Software Vendors are Lying to You.
Of course, aside from the transformation, new businesses grew that leveraged the potential of the cloud. True multi-tenant systems were written and took over a significant share of the market. With a relatively low base price and new functionality delivered quickly – that is the power of the cloud and CI/CD, isn't it? – those vendors earned a reputation and became true competitors to the old, overgrown leaders.
DevOps practices and culture emerged, allowing the first-class citizens of the cloud to deliver new services faster than ever before. The most successful adopters were – and still are – able to release their software hundreds of thousands of times a day, with zero downtime, reliably and automatically. All that may look like a dream come true; nevertheless – as usual in such cases – the world has shifted again, opening new possibilities…
Startups ante portas!
What enterprise businesses loved about their enterprise vendors was the integrations. Complex, yet integrated and more or less reliable solutions were preferred over modern, cloud-native software. Building and owning a number of integrations usually meant delays and growing costs, which were not outweighed by the gains in functionality – if such gains existed at all. For that reason, many companies decided to buy a golden cage from one of the big vendors.
Taking the risk of migrating to a cloud-native solution just to end up with a modern architectural setup may not be enough to convince anyone, especially if you need to wait years to recoup the money the migration would consume.
What might be a sufficient incentive to take the risk and replace the golden cage with a cloud solution? In my opinion: giving the customer access to a whole universe of features, created by anyone, anywhere in the world, without the need to hire a big team of in-house specialists – and the ability to utilise, and replace, any vendor with its counterpart in the startup ecosystem. That becomes extremely tempting once you look at the number of companies – mostly startups – delivering solutions to various problems.
As an example, in April 2020 martech5000.com published a viral infographic presenting the marketing technology landscape. Instead of the intended 5,000 solutions, they came up with 8,000 – and the number keeps growing!
Easier said than done? With the current state of cloud platforms – not necessarily!
How to build a composable solution
Modern solutions are usually built from tens or hundreds (if not more) of micro-services, which together form a net of interconnected components implementing the specific use-cases of the system. Each micro-service potentially implements a well-isolated behaviour that might be abstracted away and replaced by a third-party service or a custom implementation. Obviously, it is not feasible for the vendor of the cloud system to build every integration itself, as that would mean maintaining lots of integrations, potentially different for each tenant. What, then, would be the features of the ideal composable solution? In my opinion, such a system should have the following capabilities:
- be configurable per tenant,
- be configurable at runtime,
- allow each service to be replaced,
- allow custom behaviour to be injected around each service call (akin to aspect-oriented programming),
- support all flavours of integrations:
  - direct service calls – both sync and async.
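To make the first capabilities on that list concrete, here is a minimal sketch – with purely hypothetical names and a deliberately simplified configuration model – of how per-tenant, runtime-replaceable service bindings could be represented:

```python
# Hypothetical sketch: per-tenant, runtime configuration that can replace
# a vendor-provided service endpoint. All names are illustrative.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ServiceBinding:
    endpoint: str  # where calls to this service are routed

@dataclass
class TenantConfig:
    # service name -> binding; unlisted services fall back to vendor defaults
    overrides: Dict[str, ServiceBinding] = field(default_factory=dict)

# vendor defaults, shared by all tenants
DEFAULTS = {"pricing": ServiceBinding("http://pricing.vendor.svc")}

def resolve(tenant: TenantConfig, service: str) -> ServiceBinding:
    """Pick the tenant's override if present, else the vendor default."""
    return tenant.overrides.get(service, DEFAULTS[service])

# one tenant swaps the vendor pricing service for a third-party one,
# purely through configuration, at runtime
tenant_a = TenantConfig(overrides={
    "pricing": ServiceBinding("http://pricing.partner.example")
})

print(resolve(tenant_a, "pricing").endpoint)        # the tenant's override
print(resolve(TenantConfig(), "pricing").endpoint)  # the vendor default
```

In a real platform the configuration would of course live in a control plane rather than in code, but the resolution logic – override if configured, default otherwise – stays the same.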
Fortunately, modern cloud solutions built on top of technologies like Kubernetes are flexible enough to allow for such a design, and the service mesh seems to be the most promising place to implement the cross-cutting, AOP-like concerns.
A service mesh may be used in many ways, ranging from telemetry to security; nevertheless – if we set aside all the details and complexities of technologies like Istio – the mesh provides us with proxy objects around services. This simple observation leads to an analogy with the AOP proxies used in technologies like Spring AOP, where any service call can be altered. If the proxy is equipped with a mechanism to utilise runtime-defined, tenant-based configuration to add to or replace the behaviour of a vendor-provided service, the whole mesh can be fine-tuned by an implementation partner or customer directly, without involving the vendor. Let's take a look at an example:
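The idea can be sketched in code. The following snippet – with hypothetical names, standing in for what a sidecar proxy would do at the mesh level – shows tenant-defined behaviour injected around a vendor service call, in the spirit of Spring AOP around-advice:

```python
# Sketch of an AOP-like proxy around a vendor service call, mimicking what
# a service-mesh sidecar could do with tenant-based, runtime-defined
# configuration. All names are illustrative assumptions.

from typing import Callable, Optional

Service = Callable[[dict], dict]
Advice = Callable[[Service, dict], dict]  # receives the wrapped call and payload

def vendor_discount(order: dict) -> dict:
    """The vendor-provided default implementation."""
    return {**order, "discount": 0.0}

def make_proxy(service: Service, advice: Optional[Advice]) -> Service:
    """Wrap the service the way a mesh proxy would, if advice is configured."""
    if advice is None:
        return service
    return lambda payload: advice(service, payload)

# tenant-defined advice: call the default service, then alter the result
def loyalty_advice(call_next: Service, order: dict) -> dict:
    result = call_next(order)
    if order.get("loyal"):
        result["discount"] = 0.1
    return result

proxied = make_proxy(vendor_discount, loyalty_advice)
print(proxied({"net": 100.0, "loyal": True}))
# {'net': 100.0, 'loyal': True, 'discount': 0.1}
```

The tenant never touches the vendor's code: the custom behaviour is attached at the proxy, which is exactly the seam the mesh exposes.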
Obviously, this approach leads to a number of challenges. To name a few examples:
- how to guarantee a contract between the default service and a custom integration,
- how to define SLAs so that poorly performing third-party code does not break them,
- how to configure proper telemetry,
- how to properly separate tenants,
- how to avoid cycles in service calls misconfigured by customers.
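The last challenge on the list lends itself to a compact illustration: before accepting a tenant's configuration, the platform could reject call graphs that contain cycles. A hypothetical sketch, modelling the configuration as a simple "service calls services" mapping:

```python
# Reject misconfigured tenant setups whose service-call graph contains a
# cycle. The graph shape is a hypothetical simplification of the real
# configuration a platform would hold.

from typing import Dict, List, Set

def has_cycle(calls: Dict[str, List[str]]) -> bool:
    """Depth-first search over a 'service -> services it calls' mapping."""
    visiting: Set[str] = set()  # nodes on the current DFS path
    done: Set[str] = set()      # nodes fully explored, known cycle-free

    def visit(node: str) -> bool:
        if node in visiting:    # back edge on the current path -> cycle
            return True
        if node in done:
            return False
        visiting.add(node)
        for callee in calls.get(node, []):
            if visit(callee):
                return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(visit(n) for n in calls)

print(has_cycle({"checkout": ["pricing"], "pricing": ["tax"]}))  # False
print(has_cycle({"pricing": ["tax"], "tax": ["pricing"]}))       # True
```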
Assuming that all those challenges can and will be resolved, there is one more aspect to consider – User Experience (UX).
Even if the technical staff are OK – or even happy – with APIs alone, non-technical people and the end-users of the solution will most probably need some UI for their daily tasks. Moreover, they will most likely expect to use only a few applications – preferably one.
The variety of vendors leads to a situation where:
- each vendor may have a different UI technology in place,
- some vendors may be fully headless.
Unification is not an option at the scale of thousands of potential integrations. Nevertheless, with the capabilities of SSO and a micro-frontend architecture, all the UIs can be brought into a single place with limited effort. The vendor of a platform integrating the whole ecosystem in one place should provide, at minimum, the capabilities to:
- access all the crucial interfaces in one place,
- support Single Sign-On (SSO) – including social login,
- allow third parties to contribute a custom UI, especially for headless solutions,
- provide a design system to allow for consistency.
The concept presented in this post only scratches the surface of the challenge. Nevertheless, I hope it touches on the key success factors of composable architecture and presents the key difficulties. I believe that whoever is the first to solve those problems in their domain will become the new leader. If so – and for how long? Time will tell!