
4 posts tagged with "software"

Software


Starthub - trends in OSS deployments

· 6 min read

As part of my work on Starthub, I have to think about a go-to-market strategy. However, given that Starthub is essentially a platform to deploy cloud architectures through generic actions, I would first need to build some useful actions one by one. This might end up costing a lot in terms of time and effort, because I have no clear idea of what kind of actions would actually interest users. Would developers prefer a simple action that only makes a single API call, or would they rather have an action that can deploy a fleet-tracking system in one command? Given that an action could be pretty much anything, I probably need to focus on actions that bring a lot of value in a single transaction. Generally, this means targeting the deployment of entire cloud architectures, which take years to develop and build value into. Are there any "ready-made" architectures that I know, for a fact, my users would find useful? Open source projects could be the way to go. In fact, many open source projects already have a loyal user base and a well-defined scope, and are painful to deploy. Instead of building full-fledged applications myself, I could just focus on making it simpler to deploy existing ones.

Picking an open source project

There are thousands of open source projects out there. How to pick one? If we think of the pain of deployment as a crater, then the bigger the crater, the greater the value added by Starthub. In this scenario, the crater for an open source project will be defined by:

  • Width, which is its popularity. This is measured as the number of stars on GitHub (I know this is a debatable metric, but it's an acceptable proxy).
  • Depth, which is the complexity of its deployment. This is measured as the number of cloud primitives needed to deploy the project. (A toy scoring sketch follows below.)
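
To make the crater metaphor concrete, here is a toy scoring sketch that simply multiplies popularity by deployment complexity. The project names and all the figures below are made up purely for illustration.

```python
# Toy "crater score": popularity (GitHub stars) times deployment complexity
# (number of cloud primitives needed to deploy). All figures are illustrative.
def crater_score(stars: int, primitives: int) -> int:
    return stars * primitives

candidates = {
    "big-workflow-tool": (160_000, 8),    # popular and made of many primitives
    "popular-cli-utility": (45_000, 1),   # popular but trivial to deploy
    "niche-platform": (3_000, 12),        # complex but little demand
}

# Rank candidates by crater score, biggest crater first.
for name, (stars, primitives) in sorted(
    candidates.items(), key=lambda kv: crater_score(*kv[1]), reverse=True
):
    print(f"{name}: {crater_score(stars, primitives):,}")
```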

It does not take a lot of searching to find out that the biggest "craters" correspond to open-source projects that have gained massive popularity lately and are composed of many primitives, like Supabase (90k+ GitHub stars) and n8n (160k+ GitHub stars). Let's focus on n8n, the more popular of the two. If we look at the docs, there are several solutions for deploying the simplest, single-node configuration of n8n. That's great for small and hobby deployments, but not for production-grade, horizontally scalable ones. In fact, the docs also cover queue mode, which allows a much more robust, scalable configuration. Now, while it's great that n8n also takes the needs of power users into account, the documentation is quite lengthy to read and, as often happens, it leaves a lot of guesswork up to the user. Could this be a good wedge to showcase the power of Starthub?

While reading through forums, I came across a radically different approach to deploying open source projects, one that has become increasingly popular lately: the Railway approach. I found Railway very interesting because it is based on different tradeoffs than the ones Starthub chose. Starthub tries to be as unopinionated as possible and hands full control over to the user. In this way, it ensures that any edge case can potentially be handled (think of creating API keys through the web UI of a platform that offers no corresponding APIs). However, it also adds complexity and introduces the danger of state drift. Railway, on the other hand, trades freedom and security for simplicity: it lets you deploy a production-grade configuration of n8n with a single click, but it assumes you are OK with never leaving its walled garden.

More incumbents

Several incumbents seem to have adopted a similar approach to Railway:

  • dev.sh, which is the open source version of Railway. Besides hosting your stack, dev.sh makes it possible to deploy the same stacks to your own cloud, which basically turns it into a horizontal-first alternative to Coolify or CapRover.

  • Dokploy, which leans heavily into the Docker Swarm ecosystem for horizontal scaling, offering a smoother path to multi-node clusters than many other self-hosted tools.

  • Elestio, which provides a full-service experience for over 400 open-source tools (like n8n, Supabase, or Plausible) while respecting the user's sovereignty.

  • Public cloud providers like Fly.io and DigitalOcean have been offering managed container solutions for a while now.

  • Even more alternatives: Northflank, Render, Sliplane and others all have slightly different takes on this problem, but, since the focus of this article is general trends, I will leave them aside for now.

We are moving up the stack

What's the general trend then? While up until a few years ago we had to manually install open source projects, and applications more generally, on a VPS, we can now instantly install a horizontal configuration of those same projects, with the ability to scale each of the required components (db, vm, etc.) individually. In a sense, a new layer seems to be emerging in the world of cloud computing, one that focuses solely on the horizontal, scalable configuration of applications. What's interesting to notice is that the managed solutions mentioned above all rely, in one way or another, on containerisation.

Conclusion: where does Starthub stand?

Rather than focusing exclusively on deployments, Starthub is more of a Swiss Army knife that can be used to perform arbitrary tasks, among which is the deployment of cloud architectures. It can be useful in cases where:

  • A user never wants credentials to be shared with a third party (the sovereignty wedge).

  • A user needs assurance of zero lock-in.

  • A user needs to deploy architectures across multiple cloud providers.

  • A user needs to automate manual configuration steps when deploying a cloud system.

Google Pay for Venezuela

· 3 min read

I am currently in Venezuela, visiting a dear friend. While I fell in love with this country, I also had the chance to witness its systemic inefficiencies. Some are tragic, like the lack of electricity, basic healthcare services and safety, while others are much more mundane (even if still annoying), like the limited choice of products at the supermarket or the lack of digital payment solutions. So I started wondering: what could a Google Pay for Venezuela look like?

Landscape

The incumbents that I am aware of in the Venezuelan fintech sector are:

  • Pago Movil, for which users need access to a local bank account. Besides that, the UX is often clunky, and it's tied to the bolívar (VES), which suffers from hyperinflation. It is also worth mentioning that in Venezuela, people often use USD for stability but Pago Movil for liquidity. Yapago, the solution I sketch below, would aim for a middle ground, since its ledger would track USD rather than VES.

  • Zinli, for which users need a foreign bank account. While popular, Zinli is effectively a dollarized wallet backed by Banco Mercantil Panama. It leverages US-based infrastructure like the ACH (Automated Clearing House) network and Visa's rails through Zelle, which means it's essentially a solution for the "connected" elite or those with family abroad.

  • Binance, which is a generic solution focused on crypto transactions.

It seems that there might be room for a simple, local platform to exchange funds.

What a solution could look like

Some of my experience building transaction systems for solar mini-grids in Sub-Saharan Africa could be useful here. In fact, it sounds like a double-entry ledger could help tackle the problem (a minimal sketch follows the list below). An MVP would consist exclusively of a fund transfer feature:

  1. The sender scans the recipient's QR code or enters their handle into the mobile app, and confirms the amount.

  2. The ledger debits the sender's balance and credits the recipient's.

  3. Both sender and receiver are notified of the transaction.
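
To make the ledger idea more concrete, here is a minimal double-entry sketch in Python. The account handles, the USD denomination, and the seeded opening balance are illustrative assumptions rather than a design decision.

```python
from dataclasses import dataclass, field
from decimal import Decimal
from typing import List

@dataclass
class Entry:
    account: str
    debit: Decimal = Decimal("0")
    credit: Decimal = Decimal("0")

@dataclass
class Ledger:
    entries: List[Entry] = field(default_factory=list)

    def balance(self, account: str) -> Decimal:
        # User balances are liabilities: credits increase them, debits decrease them.
        return sum(
            (e.credit - e.debit for e in self.entries if e.account == account),
            Decimal("0"),
        )

    def transfer(self, sender: str, recipient: str, amount: Decimal) -> None:
        if amount <= 0:
            raise ValueError("amount must be positive")
        if self.balance(sender) < amount:
            raise ValueError("insufficient funds")
        # Each transfer is a matching debit/credit pair, so total debits
        # always equal total credits and the books stay balanced.
        self.entries.append(Entry(account=sender, debit=amount))
        self.entries.append(Entry(account=recipient, credit=amount))

ledger = Ledger()
# On-ramp (the hard part discussed below): cash received off-platform is
# recorded as a debit on the float account and a credit on the user's account.
ledger.entries.append(Entry(account="cash_float", debit=Decimal("50.00")))
ledger.entries.append(Entry(account="@maria", credit=Decimal("50.00")))

ledger.transfer("@maria", "@jose", Decimal("12.50"))
print(ledger.balance("@maria"), ledger.balance("@jose"))  # 37.50 12.50
```

Because every movement is recorded as a matching debit/credit pair, total debits always equal total credits, which makes balances easy to audit against the on-ramp and off-ramp flows discussed next.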

The on-ramp problem

Great, so we figured out the basics of how to get users to send funds to each other. However, such a system is only useful if it allows users to freely move money in and out of it. In other words: how would users convert their funds into fiat and vice versa? This is where things get tricky. While implementing the transaction engine is fairly straightforward, creating a bridge between this system and the local banking system is a compliance problem.

  • We could build integrations with the Stripes of the world to issue payouts to our users. However, as part of the sign-up process, payment processors ask their business users to provide documentation that, at the moment, is pretty much impossible to obtain from the Venezuelan Ministry of Economy and Finance.

  • We could build a network of wakalas similar to the one Safaricom developed in East Africa with M-Pesa. However, building such a network would require massive amounts of funding and years of work. The political and economic instability of the country would probably make it extra hard to raise that funding.

Yapago

Conclusion

At the moment it looks like, with my resources, connections and commitments, the best I can do is a wrapper around current solutions (e.g. Binance). This would remove the core value of my solution while making it overly dependent on incumbent services, which could easily eliminate it by implementing a new lightweight interface of their own.

Technical vs business decentralisation

· 6 min read

In the last few months I have had the chance to work from a coworking space focused on crypto projects. It has been an interesting experience, for several reasons. First, I was reminded, once again, of how vast and differentiated the world of software actually is. While web development, CRUD apps and mobile apps dominate the news that reaches me, there are hundreds, if not thousands, of verticals that I am completely unfamiliar with. Secondly, interacting with developers who work in the crypto space gave me a chance to think more deeply about ideas that I was already aware of, but had never really put serious thought into.

Decentralisation

One of the most frequent doubts I have is about the problem space in which crypto is trying to provide solutions: is it attempting to give a technical solution to a practical problem? Initially, it definitely looked so. In fact, while Bitcoin made it easier to exchange currency with no middlemen, it is also true that most users would still want the backup option of getting human assistance when things go south (e.g. losing their credentials). However, the developers I met at the coworking space often mentioned how they were building new tools to allow for credential recovery and much more. It sounds like, as the ecosystem evolves, new, more user-friendly tools become available and lower the barrier to entry to the crypto world. If this is the general trend then, given enough time, decentralised tools should completely remove the need for a man in the middle. In this sense, while we may have started out 15 years ago with a partial solution, we are now getting closer to a viable system that, besides being technically decentralised, also allows for business decentralisation. This brings about a question: do businesses actually want decentralisation?

Natural monopolies and conflicts of interest

The fewer options a customer enjoys, the more negotiating power a business has over them. That is, businesses have a natural incentive to decrease optionality in the market as much as possible. Silicon Valley famously encourages entrepreneurs to attempt to build monopolies. The question then follows: if businesses have a natural drive towards centralisation, does this not create a conflict of interest with the technical efforts towards decentralisation in software?

Exhibit A: the Internet

A great example of technical vs business decentralisation is the Internet. Built on decentralised technology, it was designed to have no single points of failure. Protocols like TCP/IP, HTTP, and SMTP are essentially "public goods" — no one owns them, and anyone can use them to build a service.

However, as the Internet matured, business forces aggressively centralised services built on top of it. While the "pipes" remained decentralised, the business-related components - for example data - became massive silos. Companies like Google, Amazon, and Meta leveraged network effects to create digital monopolies. They offered convenience and "free" services in exchange for control over data and user attention. This suggests that technical decentralisation is a necessary but insufficient condition for a truly open ecosystem. Without a way to align economic incentives, the gravity of business will always pull back toward a central hub.

Exhibit B: email vs instant messaging

A similar dynamic played out with messaging protocols in the form of email and instant messaging (Slack, WhatsApp, Discord, Telegram and the likes). Email can effectively be considered a "decentralised" protocol: no single individual or company owns email. Because it is built on open standards (SMTP, IMAP), a Gmail user can message a ProtonMail user, who can then read that message on an iPhone mail client or a Linux terminal. There is no single "CEO of Email" who can decide to shut down the network or force everyone to use the same app.

On the other hand, instant messaging networks are built on centralised designs. As a result, the Slacks and WhatsApps of the world are essentially walled gardens. One cannot send a WhatsApp message to a Slack channel (well, technically it can be done, but these networks were not built to interact that way). As a consequence, the companies running these services have come to own the entire stack: the protocol, the server, and the interface. From a user experience perspective, Slack is often "better" than email. It's faster, has better emoji reactions, and feels more cohesive. Why? Because centralisation allows for rapid iteration. Slack can push a new feature to every user overnight. Email, being decentralised, takes decades to change because thousands of independent providers have to agree on new standards. As a developer, this is the ultimate "quality vs. sovereignty" trade-off. We often sacrifice the freedom of decentralised protocols (email) for the polished, high-velocity features of centralised products (Slack).

What are the best use cases for blockchains then?

If business naturally trends toward centralisation for the sake of efficiency, then the best use cases for blockchains are those where trust is more valuable than efficiency. These are some of the examples I can think of:

  • Voting systems. In a centralized system, the "admin" can modify the database. A blockchain-based system, where any citizen can potentially spin up a node to validate election results, makes the voting process "censorship-resistant" and "auditable," where the cost of a slow tally is worth the guarantee that the result is tamper-proof.

  • Philanthropy. Donors often distrust how funds are allocated. Smart contracts can ensure that funds are only released when specific, verifiable milestones are met, providing "programmatic transparency" that a traditional charity cannot match.

  • Asset registries (land, IP, carbon credits). These require a "permanent memory" that outlasts any single government or corporation. By using a blockchain, one ensures that a title deed or carbon credit cannot be double-counted or deleted by a corrupt official.

Conclusion

Technical decentralization and business decentralization are distinct forces. While we often view decentralization as a "silver bullet" for freedom, we must acknowledge that centralization is an efficiency feature. Especially in the business world, centralization enables the specialization and rapid scaling that form the foundation of our economy. The challenge for the current generation of developers isn't to decentralize everything, but to identify the specific areas where sovereignty is worth the inefficiency tax.

Challenges in refrigeration monitoring

· 4 min read

Industrial and commercial refrigeration is a $100b+ industry. Customers are generally large, deep-pocketed companies with regular budget cycles that, on paper, are the ideal candidates for digital transformation. However, while waves of innovation like Industry 4.0 and programs such as Horizon 2020 had profound effects on other sectors, the refrigeration sector remains mostly untouched.

Remote monitoring

The most classic case of this disconnect is remote monitoring. When trends like the Internet of Things gained popularity by promising a future where every device would be connected to the cloud, the main doubts were about the associated costs: no matter how cheap data bundles and hardware get, not all devices need to be connected to the internet. On the other hand, industrial and commercial refrigeration, being a comparatively "wealthy" sector, should not have much trouble covering the cost of bringing online pieces of equipment that are easily worth $200,000+. This should be especially true in a sector where downtime usually corresponds to non-negligible losses. Yet, it's 2022, and those same companies whose supermarkets were so quick to adopt automatic checkouts have done little to collect and store data about their machinery. In fact, data produced by commercial and industrial machinery is still tragically hard to retrieve. Is it really just about costs then?

Refrigeration machine in a supermarket

Main causes

  • The one-time-fee curse. The core tension is really about payment structures. Medium and large businesses have a strong incentive to be as independent as possible and own as much as possible of the vertical. This clashes with the recurring payment structure that is typical of the monitoring world. SaaS (Software as a Service) models are often viewed as a "tax" rather than a tool, leading many firms to prefer a decaying, offline asset they own over a superior, connected one they have to rent.

  • Hardware vendor lock-in. Another problem is the fact that component manufacturers (compressors, inverters, PLC systems) have a strong incentive to try and build their own walled gardens. By using proprietary communication protocols instead of open standards like Modbus or MQTT, they force the customer into using their specific monitoring software (see the sketch after this list for what an open-protocol integration could look like). This creates a fragmented ecosystem where a supermarket manager might need five different apps to check the health of a single cold room.

  • Perceived added value. If component manufacturers and software providers cannot bridge the gap between data and insights, the investment feels hollow. In many cases, monitoring systems are sold as simple alarm triggers — telling a manager the temperature is too high after the inventory has already begun to spoil. Without predictive maintenance or clear energy-saving metrics, the ROI (Return on Investment) remains theoretical in the eyes of the budget holders.

  • If it ain't broke mentality. Unlike consumer electronics, industrial refrigeration assets are built to last 15 to 20 years. This long lifecycle creates a massive "brownfield" problem. Integrating modern IoT sensors with a compressor built in the early 2000s requires specialized labor and custom gateways, often making the installation process more expensive than the hardware itself.
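
As a rough illustration of the open-standards alternative mentioned above, here is a sketch that publishes cold-room telemetry over MQTT using the paho-mqtt client. The broker address, topic layout, and payload fields are assumptions made up for the example.

```python
import json
import time

import paho.mqtt.publish as publish  # pip install paho-mqtt

# Hypothetical broker and topic layout; a real deployment would read the
# values from the sensor or PLC rather than hard-coding them.
BROKER = "broker.example.local"
TOPIC = "refrigeration/store-042/cold-room-1/telemetry"

reading = {
    "site": "store-042",
    "asset": "cold-room-1",
    "temperature_c": 3.8,
    "ts": int(time.time()),
}

# publish.single() connects to the broker, publishes one message, and disconnects.
publish.single(TOPIC, payload=json.dumps(reading), hostname=BROKER, qos=1)
```

Any MQTT-capable dashboard or historian could subscribe to the same topic, which is exactly the kind of interoperability that proprietary protocols rule out.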

Conclusion

The refrigeration sector stands at a crossroads. The financial capacity for innovation is there, and the environmental urgency, driven by energy costs, has never been higher. However, the transition to a truly smart cold chain is being throttled not by a lack of technology, but by misaligned incentives and legacy business models. For the industry to finally catch up with the rest of the Industry 4.0 revolution, we must move past proprietary silos and embrace interoperability. Until we solve the friction between hardware ownership and software intelligence, the data generated by these machines will continue to vanish into thin air.