
Starthub - horizontal deployments

· 6 min read

As a part of my work on Starthub, I found myself having to think of a go-to-market strategy. However, given that Starthub is essentially a platform to deploy generic actions, I would first need to build some useful actions one by one. This might end up costing me a lot in terms of time and effort, because I have no clear idea of what kind of actions would actually interest users. Given that an action could be pretty much anything, from a single API call to the deployment of an entire stack, I probably need to focus on actions that bring a lot of value to the user in a single transaction. Are there any "ready-made" actions that I know, for a fact, my users would find useful? Maybe the deployment of an existing open source project could be what I am looking for. In fact, many open source projects already have a loyal user base and a well-defined scope, and are painful to deploy. Instead of building full-fledged applications myself, I could just focus on making it simpler to deploy existing ones.

Picking an open source project

There are thousands of open source projects out there. How to pick one? If we think of the pain of deployment as a crater, then the bigger the crater, the greater the value added by Starthub. In this scenario, the crater for an open source project will be defined through:

  • width, which is its popularity. This is measured as the "number of stars on GitHub" (I know this is a debatable metric, but it's an acceptable proxy).
  • depth, which is the complexity of its deployment. This is measured as the number of cloud primitives needed for the project to be deployed.
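
As a toy model, the two dimensions can be combined into a single score. The star and primitive counts below are illustrative guesses, not measured values:

```python
# Toy "crater score": width (popularity) times depth (deployment complexity).
# The numbers below are rough illustrations, not real measurements.
projects = {
    "n8n":       {"stars": 160_000, "primitives": 8},
    "supabase":  {"stars": 90_000,  "primitives": 12},
    "hobby-app": {"stars": 2_000,   "primitives": 2},
}

def crater_score(stars: int, primitives: int) -> int:
    """Width (GitHub stars) x depth (cloud primitives needed to deploy)."""
    return stars * primitives

# Rank candidate projects by how big a "crater" they represent.
ranked = sorted(projects, key=lambda p: crater_score(**projects[p]), reverse=True)
print(ranked)  # largest crater first
```

Multiplying width by depth is of course just one possible weighting; a real ranking would tune this against actual deployment effort.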

It does not take a lot of searching to find out that the biggest "craters" are those open-source projects that have been gaining massive popularity lately. Among others, Supabase (90k+ GitHub stars) and n8n (160k+ GitHub stars) quickly come to mind. Let's focus on n8n, the more popular of the two. If we look at the docs, there are several solutions to deploy the simplest, single-node configuration of n8n. That's great for small and hobby deployments, but not for production-grade, horizontally scalable ones. The docs, however, also cover queue mode, which allows a much more robust, scalable configuration.

Now, while it's great that n8n also takes into account the needs of power users, the documentation is quite lengthy and, as often happens, leaves a lot of guesswork up to the user. Could this be a good wedge to promote Starthub?

A different approach to deployments of open source projects

While reading up on forums, I came across a radically different approach to deploying open source projects that has become increasingly popular lately: Railway. I found Railway very interesting because it chooses different tradeoffs. Starthub tries to be as unopinionated as possible and hands full control over to the user. In this way, it ensures that any edge case can potentially be handled (think of creating API keys through the web UI of a platform that offers no APIs for it). However, this also adds complexity and makes it harder to package actions. Railway, on the other hand, trades freedom and security for simplicity: it lets you deploy a production-grade configuration of n8n with a single click, but it assumes you are ok with never leaving its walled garden.

More incumbents

Several incumbents seem to have adopted an approach similar to Railway's:

  • dev.sh, which is the open source version of Railway. Besides hosting your stack, dev.sh makes it possible to deploy the same stacks to your own cloud, which basically turns it into a horizontal-first alternative to Coolify or CapRover.

  • Dokploy, an open source, self-hostable deployment platform in a similar vein.

  • Elestio also lets you host open source projects on the cloud.

  • Public cloud providers themselves have started offering tools to deploy open source projects with a single click.

Even more alternatives are already available, such as Northflank, Render, Sliplane and others, but, since the focus of this article is on general trends, I will leave them aside for the time being.

Takeaway: we are moving up the stack

While up until a few years ago we had to install open source projects and, more generally, applications manually on a VPS, we can now instantly install a horizontal configuration of said projects, with the ability to scale each of the required components (db, vm, etc.) individually. In a sense, a new layer seems to be emerging in the world of cloud computing, one that solely focuses on the horizontal, scalable configuration of applications.

It is interesting to notice that the managed solutions mentioned above all rely, in one way or another, on containerisation. Cloud providers like Fly.io and Digital Ocean have been offering managed container solutions for a few years now; containerisation seems to be the enabler of this emerging layer.

Where does Starthub stand then?

Rather than focusing exclusively on deployments, Starthub is more of a Swiss Army knife that can be used to perform arbitrary tasks, among which is deploying cloud architectures. It can be useful in cases where:

  • A user never wants credentials to be shared with a third party (the sovereignty wedge)

  • A user needs assurance of zero lock-in

  • A user needs to deploy architectures across multiple cloud providers

  • A user needs to automate manual configuration steps when deploying a cloud system

What's next?

I wonder whether the next evolution of this trend could be the ability to compose stacks on the cloud in a Lego fashion. Railway and dev.sh are already sort of hinting at this through their UIs, where, if we sprinkled a bit of AI, we could probably get some meaningful autowire functionality to hook services into each other as soon as a new one is dropped into the canvas.

I can think of these future challenges:

  • How to streamline the deployment of application updates?

  • How to allow for additive, manual changes in an existing cloud configuration?

Energy as a tool for social impact

· 5 min read

In the context of energy access, "PUE" stands for "Productive Use of Energy". It describes the use of energy to generate income, create value, or improve productivity. It is the holy grail of energy access and the dream of every entrepreneur in the sector. This is because PUE tackles two different problems at once:

  • Financial Viability. PUE creates a virtuous cycle for the utility or energy provider. By enabling income-generating activities, the provider shifts the customer base from low-consumption households to "anchor loads." These users consume more energy and, crucially, have a higher ability to pay because the energy itself is financing their bill. This reduces the risk of default and improves the payback period on infrastructure.

  • Real Poverty Reduction. Unlike basic electrification — which provides comfort (lighting) but rarely changes a family's economic bracket — PUE addresses the productivity gap. By mechanizing labor-intensive tasks (like milling grain or pumping water), energy saves time and multiplies output. This transitions a community from subsistence to a surplus economy, creating local jobs that outlast the initial energy project.

However, PUE is surprisingly hard to achieve. While the "600 million people without access to energy" pitch allowed companies like Sunking, Zola and a whole wave of solar companies to stimulate investor appetite and develop solutions for the residential market (lamps, TV sets, fans and phone chargers), most productive use cases remain largely untouched. Since productive use cases turn energy into a platform on which customers generate income, it would not be unreasonable to think that they would make productive users better customers. In other words, it does not sound like PUE is hard because of a lack of incentives.

Making the pie bigger

If energy use were to be represented as a pyramid, lighting, entertainment through TV sets and stereos, and phone charging would be at the very bottom. Refrigeration, egg incubation and grain milling fall slightly higher, while at the very top would be activities like well digging, irrigation or field plowing. The higher in the pyramid, the greater the impact the use of energy has on the local community. For example, a small shop can slightly increase its revenue by selling phone recharging as a service. However, the uses of energy at the bottom of the pyramid do not generally lead to deep impact, or, in simpler terms, do not "make the pie bigger". After all, while it is great that a small business can stay open longer thanks to lighting, the improvement is marginal. On the other hand, if the farmers of a village could start using machinery to dig a well, the effects on the community could be much deeper.

Is this really about energy?

Of course, a tractor cannot be powered through solar energy, and a well cannot be dug with a solar-powered excavator (yet). This makes me wonder then: is this really about energy, or rather about power? To move the needle on poverty, we don't just need kilowatt-hours spread over a day; we need the high-torque, high-wattage capability to move heavy machinery. We aren't just looking for "light"; we are looking for "force."

Is solar the way to go?

If we assume that "increasing the pie" is more about power than about energy, then the question becomes: is solar the best way to achieve it? Distributed solar is excellent for efficiency and low-load electronics. But for the "top of the pyramid" tasks, the capital expenditure (CAPEX) for batteries and inverters capable of handling high surge currents is often prohibitive for the very communities that need them most.

Domain thinking

I am a big fan of domain thinking. When solving a problem in a given domain is hard, changing domains often improves one's odds of solving it. One of my favourite examples of this is the Fourier transform in mathematics, where complex integrals can be turned into simple multiplications, which can then be solved much more easily. In our case, if we view PUE strictly as an energy problem, we get stuck on the physical constraints of hardware, which make it prohibitively expensive to solve any kind of problem in a context that is extremely price sensitive.
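
The convolution theorem is the canonical instance of this trick (stated here up to normalisation constants): a convolution integral in one domain becomes a pointwise product in the other.

```latex
(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau
\quad \xrightarrow{\ \mathcal{F}\ } \quad
\widehat{f * g}(\omega) = \hat{f}(\omega)\, \hat{g}(\omega)
```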

The financial domain

When productivity is seen through the eyes of an investor rather than an engineer, it becomes a financing problem. What if, instead of changing the technical specs of the tools available to farmers, we just focused on making them more accessible? This can be done in several ways. Here are a couple:

  • Rent-to-own. Much like the Solar Home Systems (SHS) revolution, applying lease-to-own models to high-torque machinery lowers the barrier to entry. The asset itself acts as the collateral, and the increased yield from the "power" pays for the "energy."

  • Cost-spreading. Companies like Hello Tractor offer on-demand plowing services to farmers. This shifts the burden from CAPEX (buying the tractor) to OPEX (paying for the service), allowing the "pie-growing" activity to happen without the user needing to solve the energy/power infrastructure problem themselves.

Final thoughts

As engineers, we often have a tendency to see the problems around us from a technical point of view. If we stay stuck in the "energy domain," we keep trying to build cheaper batteries or smarter load-shedding algorithms. These are noble pursuits, but they are incremental. However, when we shift domains—into finance, logistics, or even community-based ownership models—we find that we can bypass technical bottlenecks entirely. We stop trying to make a solar panel act like a diesel generator and instead build a financial system that makes the "force" of a diesel generator affordable, or a sharing economy that makes it efficient.

Starthub - levels of abstraction

· 3 min read

Over the past weeks I have been spending a non-negligible amount of time working with Supabase, a platform for backend development that has exploded in popularity in recent years. Sitting on top of Postgres, Supabase uses classic SQL dumps to dump and restore a database schema. SQL dumps are not just snapshots of a database; they can be seen as a way to package the entire business logic of a company. At the same time, in one of my past blog posts I explored the idea of a top-to-bottom approach to building software, where an entire system could be deployed with a single command, independently of the number and complexity of its components. The question must then be asked: could SQL be the right tool to package entire systems, so they can be exchanged, traded and deployed in a standardised way? And, if not, what would such a tool look like? What would be the best level of abstraction to work at?

Starthub - the right mental model

· 4 min read

Over the past few months I have been thinking quite a bit about what a tool to deploy stacks of arbitrary complexity should look like. I have already written two blog posts about the rationale and the best level of abstraction at which to implement it. So far, it looks like some kind of orchestration engine powered by Docker/WebAssembly. This would make it possible to package systems (frontend, backend, any kind really) of arbitrary complexity, potentially even running UI-based configuration steps automatically. This is far from a complete model though, and many questions are still unanswered.

IEC specifications for software

· 4 min read

Recently I have had to implement a token generation system to help customers buy energy in solar minigrids. The STS standard is widely used in Sub-Saharan countries, and it specifies how tokens should be generated to control energy, gas and water meters. Most smart energy meters deployed in the region follow IEC 62055-41. For those who do not know (I didn't), IEC specifications are international technical standards published by the IEC (International Electrotechnical Commission). They define how electrical, electronic, and digital systems should be designed, built, tested, and made interoperable. Reading this kind of specification for the first time made me wonder why such specifications are needed, why they are not mainstream in the world of software, and whether they could add value to it.

Starthub - IKEA for software

· 5 min read

The company I currently work for offers a software platform to manage solar mini-grids. From a user's perspective, the platform consists of an admin panel and a suite of mobile-first web apps that our customers use to manage their solar mini-grids. It is a fairly standard IoT system: a storage layer (Postgres, Timescale and S3 are the main tools), a view layer (Vue web apps) and a business logic layer consisting of a set of NestJS services for timeseries data and financial transaction processing. The schema is also nothing new: a set of tables to track the state of energy meters, tables to process inbound and outbound commands from and to devices in the field, and some RBAC tables are the centerpieces.

Yet, it took us about a year to reach a production-grade setup. RLS rules, social login, embedding Grafana charts in the frontend, setting up a timeseries database and a double-entry accounting system took us months to get right. During that year, I often wondered about a faster, simpler way to get started from a ready-made, use-case-focused template instead of rebuilding everything, once again, from scratch. After all, our system was not too different from, say, one of those fleet-tracking systems commonly offered in the market.

A Google Pay for Venezuela

· 3 min read

I am currently in Venezuela, visiting a dear friend. While I fell in love with this country, I also had the chance to witness its systemic inefficiencies. Some are tragic, like the lack of electricity, basic healthcare services and safety, while others are much more mundane (even if still annoying), like the limited choice of products at the supermarket or the lack of digital payment solutions.

Landscape

  • Pago Movil, for which a user needs access to a local bank account.

  • Zinli, for which users need a foreign bank account.

  • Binance, which is instead focused on crypto transactions.

Other options, like Venmo, Google Pay and the like, are not available in the country.

What a solution could look like

It looks like some of my past work building transaction systems for mini-grids in Sub-Saharan Africa could be useful here. In fact, it sounds like a double-entry ledger could help tackle the problem. An MVP would consist exclusively of a fund transfer feature:

  1. The sender scans the recipient's QR code or enters their handle into the mobile app.

  2. Both sender and receiver are notified of the transaction.

In this blog post I will go over in detail the implementation of the system, which I built in a couple of weeks of free time.
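
As a minimal sketch of the ledger mechanics (the schema and names are my own assumptions, not the actual implementation), every transfer posts two balancing entries, so the ledger sums to zero by construction:

```python
# Minimal double-entry transfer sketch. Hypothetical in-memory schema:
# each transfer posts a debit and a matching credit, so the sum of all
# entries is always zero.
from dataclasses import dataclass

@dataclass
class Entry:
    account: str
    amount: int  # minor units (e.g. cents); negative = debit

ledger: list[Entry] = []
balances: dict[str, int] = {"alice": 10_000, "bob": 0}

def transfer(sender: str, recipient: str, amount: int) -> None:
    if amount <= 0 or balances[sender] < amount:
        raise ValueError("invalid amount or insufficient funds")
    # Two entries per transaction: debit the sender, credit the recipient.
    ledger.append(Entry(sender, -amount))
    ledger.append(Entry(recipient, +amount))
    balances[sender] -= amount
    balances[recipient] += amount

transfer("alice", "bob", 2_500)
assert sum(e.amount for e in ledger) == 0  # double-entry invariant
```

An actual system would persist entries in a database and wrap each transfer in a transaction, but the invariant stays the same.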

How do users get money in and out of the system?

Great, so we figured out the basics of how to get users to send funds to each other. But how do they convert their funds into fiat and vice versa? Here is where things got tricky. In fact, while implementing the transaction engine proved fairly straightforward, moving money across the boundary of the system is a different story:

  • We could build integrations with the Stripes of the world to issue payouts to our users. However, as a part of the sign up process, payment processors ask their business users to provide documentation that, at the moment, is pretty much impossible to obtain from the Venezuelan Ministry of Economy and Finances.

  • We could build a network of wakalas similar to the one Safaricom developed in Eastern Africa with Mpesa. However, building such a network would require massive amounts of funding and years of work. The political and economic instability of the country would probably make it extra hard to raise funding.

Yapago

Takeaway: the obstacle is regulation, not tech

At the moment it looks like, with my resources, connections and commitments, the best I can do is a wrapper around existing solutions (e.g. Binance). This would strip away the core value of my solution and make it dependent on those incumbents, which could easily eliminate it by shipping a new lightweight interface of their own.

A Google Maps for events?

· 7 min read

Every few months I see a new events app come up. By "events" I mean music concerts, sport competitions, social gatherings, art shows and more. It seems that, at some point, every developer has thought of showing, on a list or a map, an index of all the events happening in their city. I definitely see the appeal of the idea, for several reasons:

  • As a frequent traveller, I often find myself in unfamiliar contexts, and would like to know what's happening around me.

  • I am not on social media, nor am I thrilled at the idea of being exposed to mountains of social media slop just to know what's happening around me.

  • Like an increasing amount of people, I am looking for opportunities to meet new friends offline. An events app would help me navigate offline opportunities.

Yet, the domains of those apps that I see periodically launch on Hacker News inevitably become vacant after a year, and we are back to square one. Why don't we have a "Netflix of events" yet?

The landscape

There are several solutions that currently sit at the edge of an "index of events":

  • Eventbrite, Meetup, Luma. These services specialise in social events, and serve as platforms to help organisers schedule, promote and manage events, while helping attendees discover and RSVP. They are probably the closest thing to what a "Google Maps for events" should look like. By focusing only on social gatherings, however, they gloss over all those events that are social but not peer-organised (e.g. a music concert).

  • Social media. This is probably the biggest one. WhatsApp groups and Instagram accounts hold massive amounts of information about what is happening and where. However, their incentive to expose users to as much content as possible defeats the very attempt to get offline mentioned above.

  • Facebook Events. Again, social media.

  • Songkick and similar music concert apps are fairly popular in the music world. However, since they are only focused on music events, the net they cast is not wide enough to capture mainstream interest.

  • Ticketing platforms. Ticketmaster only promotes events it manages.

  • Official city websites. Berlin is a great example. However, these are fragmented, often publicly maintained websites that only report geographically siloed data.

  • Location-specific newsletters and blogs. In expat and nomad hotspots like Berlin, London or Madrid, niche services like Social Butterfly radar are common. However, just like city web sites, they are fragmented and hyperlocal.

  • Fever and similar services are positioned as a commercial product rather than an index. While Google Maps feels like a free index (I'd say on a similar level to Wikipedia), Fever feels like a closed, commercial system, where non-monetizable events are not reported.

  • Airbnb experiences, Tripadvisor Activities and similar services all offer custom, on demand experiences that are not exactly events.

Why hasn't Google done it?

Somehow, big tech seems to have avoided this problem so far. Google is probably the firm best positioned to tackle the problem of events. In fact, event data could be presented as a feature of their maps product, as smart cards in the search engine, or as a standalone product.

I can think of a few reasons why this might not have happened yet:

  • It does not pass the toothbrush test. However, one could argue that some of Google's products do not pass that test either.

  • It is not profitable enough. However, Google seems to have shipped many products in the name of collecting information.

  • Unlike information about businesses on Google Maps, information about events changes too quickly. However, products like Google Travel also expose information that changes quickly.

  • Businesses on Google Maps want to be found by definition. This is not necessarily true for events, where exclusivity is sometimes a feature.

On the other hand, I can also think of reasons why a company like Google might be interested in building something like this:

  • It would improve their search business.

  • It would add value to their maps product by gathering more data about which events people are most interested in.

Doubts

There are a few aspects of this idea that are still unclear:

  • Would a movie at the cinema be an event?

  • Would a long-running art show in a gallery be an event?

  • Do experiences count as events?

What could an events app look like?

Although I have been using Google Maps as a point of reference, I wonder whether a map would be the best interface to represent what's effectively a list of items. After all, maybe there is a reason why Eventbrite and Meetup decided on using an Airbnb-like interface that makes it easy to both filter events and visualise how they are geographically distributed in a city.

How to populate the platform?

As with any aggregator, an events service is only useful as long as it has plenty of good quality content in it. At the same time, because data is siloed and incumbents actively protect it (it's their moat), seeding the platform becomes the biggest problem. Here is what I can think of:

  • Seeding long-term events manually. Sports competitions, art shows and music events are generally scheduled long before they take place. In other words, they have a long "shelf life". Starting with these events means targeting the low-hanging fruit first.

  • Gamification. Very much like it helped Google Maps and StackOverflow, gamification could create soft incentives for users to help seed data. For example, showing how many users have viewed an event posted by another user would encourage users to participate more.

  • Scraping chat groups on instant messaging platforms such as Whatsapp and Telegram by leveraging protocols like Matrix.

Challenges

  • Event duplication. Aggregating information from different sources almost always involves a risk of duplication, which means that deduplication mechanisms are needed. The solution to this kind of problem should be within reach, since a combination of the name, description, date and location of an event should make it possible to determine, with a relatively high degree of certainty, whether two items actually refer to the same event.
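
A hedged sketch of what such a heuristic could look like, assuming each record carries a name, an ISO date and coordinates; the thresholds and the flat-earth distance approximation are guesses to be tuned against real data:

```python
# Toy event-deduplication heuristic: two records likely refer to the same
# event if they fall on the same date, happen close by, and have similar
# names. Thresholds are illustrative, not validated.
from difflib import SequenceMatcher

def same_event(a: dict, b: dict,
               name_threshold: float = 0.8,
               max_km: float = 0.5) -> bool:
    if a["date"] != b["date"]:
        return False
    # Rough distance in km from lat/lon deltas (flat-earth approximation,
    # fine at city scale; 0.62 ~ cos(latitude) around Berlin).
    dlat = (a["lat"] - b["lat"]) * 111.0
    dlon = (a["lon"] - b["lon"]) * 111.0 * 0.62
    if (dlat ** 2 + dlon ** 2) ** 0.5 > max_km:
        return False
    similarity = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    return similarity >= name_threshold

a = {"name": "Jazz Night at B-Flat", "date": "2025-06-01", "lat": 52.52, "lon": 13.40}
b = {"name": "Jazz night @ B-Flat", "date": "2025-06-01", "lat": 52.521, "lon": 13.401}
match = same_event(a, b)  # likely duplicates under these thresholds
```

A production version would add the description field, normalise venue names, and probably cluster candidates instead of doing pairwise checks.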

Go-to-market strategy

Because its target audience is so vast, an events platform would be extremely hard to promote all at once. The most successful attempt I have seen so far is Luma, which focuses specifically on tech events. This seems to be a classic example of "finding a narrow wedge and sticking to it" until the platform is popular enough to allow for dilution into other verticals, such as sports events.

Technical implementation

Given the relational nature of the data, a SQL database should cover most of the needs of this use case. A PWA would probably also be enough for an MVP, and it would allow:

  • A single codebase

  • Low enough UI latency

  • Repackaging for Play Store and App Store if needed, while avoiding publication hell every time an API endpoint changes
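
As a sketch of the relational claim above, here is a minimal schema, using in-memory SQLite as a stand-in for the eventual SQL database; the table and column names are assumptions, not a finalised design:

```python
# Minimal relational schema sketch for the events MVP. SQLite is used
# here only as a stand-in; names and columns are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE venues (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    lat  REAL NOT NULL,
    lon  REAL NOT NULL
);
CREATE TABLE events (
    id        INTEGER PRIMARY KEY,
    venue_id  INTEGER NOT NULL REFERENCES venues(id),
    name      TEXT NOT NULL,
    category  TEXT NOT NULL,   -- e.g. music, sports, art
    starts_at TEXT NOT NULL,   -- ISO 8601
    source    TEXT NOT NULL    -- where the record was seeded/scraped from
);
CREATE INDEX idx_events_start ON events(starts_at);
""")
conn.execute("INSERT INTO venues VALUES (1, 'B-Flat', 52.52, 13.40)")
conn.execute(
    "INSERT INTO events (venue_id, name, category, starts_at, source) "
    "VALUES (1, 'Jazz Night', 'music', '2025-06-01T20:00', 'manual')"
)
rows = conn.execute(
    "SELECT e.name, v.name FROM events e JOIN venues v ON v.id = e.venue_id"
).fetchall()
print(rows)
```

Tracking the `source` column from day one would also make the deduplication work discussed earlier easier to audit.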

Technical vs business decentralisation

· 6 min read

In the last few months I have had the chance to work from a coworking space focused on crypto projects. It has been an interesting experience, for several reasons. First, I was reminded, once again, how vast and differentiated the world of software actually is. While web development, CRUD and mobile apps make up the news that reaches me most frequently, there are hundreds, if not thousands, of verticals that I am completely unfamiliar with. Secondly, interacting with developers who work in the crypto space gave me a chance to think more deeply about ideas that I was already aware of, but had never really put serious thought into.

Decentralisation

One of my most frequent doubts concerns the domain in which crypto is trying to provide solutions: is it attempting to give a technical solution to a practical problem? Initially, it definitely looked so. In fact, while Bitcoin definitely made it easier to exchange currency with no middlemen, it is also true that most users still want the backup option of human assistance when things go south (e.g. losing their credentials). However, the developers I met at the coworking space often mentioned how they were building new tools to allow for credential recovery and much more. It sounds like, as the ecosystem evolves, new, more user-friendly tools become available and lower the barrier to entry into the crypto world. If this is the general trend, then, given enough time, decentralised tools should completely remove the need for a man in the middle. In this sense, while we perhaps started out 15 years ago with a partial solution, we are now getting closer to a viable system that, besides being technically decentralised, also allows for business decentralisation. This brings about a question: do businesses actually want decentralisation?

Natural monopolies and conflicts of interest

The fewer options a customer enjoys, the more negotiating power a business has over them. That is, businesses have a natural incentive to decrease optionality in the market as much as possible. Silicon Valley famously encourages entrepreneurs to attempt to build monopolies. The question then follows: if businesses have a natural drive towards centralisation, does this not create a conflict of interest with the technical efforts for decentralisation in software?

Exhibit A: the Internet

A great example of technical vs business decentralisation is the Internet. Built on decentralised technology, it was designed to have no single points of failure. Protocols like TCP/IP, HTTP, and SMTP are essentially "public goods" — no one owns them, and anyone can use them to build a service.

However, as the Internet matured, business forces aggressively centralised services built on top of it. While the "pipes" remained decentralised, the business-related components - for example data - became massive silos. Companies like Google, Amazon, and Meta leveraged network effects to create digital monopolies. They offered convenience and "free" services in exchange for control over data and user attention. This suggests that technical decentralisation is a necessary but insufficient condition for a truly open ecosystem. Without a way to align economic incentives, the gravity of business will always pull back toward a central hub.

Exhibit B: email vs instant messaging

A similar dynamic played out with messaging protocols in the form of email and instant messaging (Slack, WhatsApp, Discord, Telegram and the likes). Email can effectively be considered a "decentralised" protocol: no single individual or company owns email. Because it is built on open standards (SMTP, IMAP), a Gmail user can message a ProtonMail user, who can then read that message on an iPhone mail client or a Linux terminal. There is no single "CEO of Email" who can decide to shut down the network or force everyone to use the same app.

On the other hand, instant messaging networks are built on centralised designs. As a result, the Slacks and WhatsApps of the world are essentially walled gardens. One cannot send a WhatsApp message to a Slack channel (well, technically it can be done, but these networks were not built to interact that way). As a consequence, the companies running these services have come to own the entire stack: the protocol, the server, and the interface. From a user experience perspective, Slack is often "better" than email. It's faster, has better emoji reactions, and feels more cohesive. Why? Because centralisation allows for rapid iteration. Slack can push a new feature to every user overnight. Email, being decentralised, takes decades to change because thousands of independent providers have to agree on new standards. As a developer, this is the ultimate "quality vs. sovereignty" trade-off. We often sacrifice the freedom of decentralised protocols (email) for the polished, high-velocity features of centralised products (Slack).

What are the best use cases for blockchains then?

If business naturally trends toward centralisation for the sake of efficiency, then the best use cases for blockchains are those where trust is more valuable than efficiency. These are some of the examples I can think of:

  • Voting systems. In a centralized system, the "admin" can modify the database. A blockchain-based system, where any citizen can potentially spin up a node to validate election results, makes the voting process "censorship-resistant" and "auditable," where the cost of a slow tally is worth the guarantee that the result is tamper-proof.

  • Philanthropy. Donors often distrust how funds are allocated. Smart contracts can ensure that funds are only released when specific, verifiable milestones are met, providing "programmatic transparency" that a traditional charity cannot match.

  • Asset registries (land, IP, carbon credits). These require a "permanent memory" that outlasts any single government or corporation. By using a blockchain, one ensures that a title deed or carbon credit cannot be double-counted or deleted by a corrupt official.

Conclusion

Technical decentralization and business decentralization are distinct forces. While we often view decentralization as a "silver bullet" for freedom, we must acknowledge that centralization is an efficiency feature. Especially in the business world, centralization enables the specialization and rapid scaling that form the foundation of our economy. The challenge for the current generation of developers isn't to decentralize everything, but to identify the specific areas where sovereignty is worth the inefficiency tax.

Challenges in refrigeration monitoring

· 2 min read

Industrial and Commercial refrigeration is a $100b+ industry. Customers are generally large, deep-pocketed companies with regular budget cycles that, given the right incentives, should have no problem adopting new technologies. However, while waves of innovation like Industry 4.0 and programs such as Horizon 2020 had profound effects on other sectors, the refrigeration sector remained mostly untouched.

Remote monitoring

The most classic case of this disconnect is remote monitoring. When trends like the Internet of Things gained popularity by promising a future where every device would be connected to the cloud, the main doubts were about the associated costs: no matter how cheap data bundles and hardware get, not all devices need to be connected to the internet. On the other hand, industrial and commercial refrigeration, being a comparatively "wealthy" sector, should not have much trouble covering the cost of bringing massive pieces of equipment online. This should be especially true in a sector where downtime usually corresponds to non-negligible losses. Yet, it's 2022, and the markets that are so quickly adopting automatic checkouts have done little to collect data about the efficiency of their machinery. Data produced by commercial and industrial machinery is still tragically hard to retrieve. Is it really just about costs then?

Refrigeration machine in a supermarket

The one-time-fee curse

The core tension is really about payment structures. Medium and large businesses have a strong incentive to be as independent as possible and own as much as possible of the vertical. This clashes with the recurring payment structure that is typical of the monitoring world.

Hardware platforms and lock-in

Another problem is that component manufacturers (compressors, inverters, PLC systems) have a strong incentive to keep customers locked into their own proprietary monitoring ecosystems rather than expose open, vendor-neutral data interfaces.

Will open monitoring ever become mainstream in this sector?