
4 posts tagged with "software"

Software


Starthub - horizontal deployments

· 6 min read

As part of my work on Starthub, I found myself having to think about a go-to-market strategy. However, given that Starthub is essentially a platform to deploy generic actions, I would first need to build some useful actions one by one. This might end up costing me a lot of time and effort, because I have no clear idea of what kind of actions users would actually find interesting. Given that an action could be pretty much anything, from a single API call to the deployment of an entire stack, I probably need to focus on actions that bring a lot of value to the user in a single transaction. Are there any "ready-made" actions that I know, for a fact, my users would find useful? Maybe the deployment of an existing open source project could be what I am looking for. In fact, many open source projects already have a loyal user base and a well-defined scope, and are painful to deploy. Instead of building full-fledged applications myself, I could just focus on making it simpler to deploy existing ones.

Picking an open source project

There are thousands of open source projects out there. How to pick one? If we think of the pain of deployment as a crater, then the bigger the crater, the greater the value added by Starthub. In this scenario, the crater for an open source project will be defined through:

  • width, which is its popularity, measured as the number of stars on GitHub (I know this is a debatable metric, but it's an acceptable proxy).
  • depth, which is the complexity of its deployment, measured as the number of cloud primitives needed to deploy the project.
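This scoring idea can be sketched in a few lines of Python. The star counts are the rough figures mentioned later in this post; the primitive counts are my own illustrative guesses, not measurements:

```python
# Hypothetical "crater score": width (GitHub stars) times depth (number of
# cloud primitives needed for a production deployment).

def crater_score(github_stars: int, cloud_primitives: int) -> int:
    """The bigger the crater, the greater the value added by a deployment tool."""
    return github_stars * cloud_primitives

# Primitive counts below are guesses for the sake of the example.
projects = {
    "n8n": (160_000, 8),       # e.g. main instance, workers, queue, database, ...
    "supabase": (90_000, 12),  # e.g. database, auth, storage, realtime, ...
}

ranked = sorted(projects, key=lambda p: crater_score(*projects[p]), reverse=True)
print(ranked)
```

With these made-up depth values, n8n edges out Supabase, which matches the intuition that raw popularity dominates when complexity is comparable.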

It does not take a lot of searching to find out that the biggest "craters" correspond to the open-source projects that have been gaining massive popularity lately. Among others, Supabase (90k+ GitHub stars) and n8n (160k+ GitHub stars) quickly come to mind. Let's focus on n8n, the more popular of the two. If we look at the docs, there are several solutions for deploying the simplest, single-node configuration of n8n. That's great for small and hobby deployments, but not for production-grade, horizontally scalable ones. However, the docs also cover queue mode, which allows a much more robust, scalable configuration.

Now, while it's great that n8n also takes into account the needs of power users, the documentation is quite lengthy, and, as often happens, it leaves a lot of guesswork up to the user. Could this be a good wedge to promote Starthub?

A different approach to deployments of open source projects

While reading up on forums, I came across a radically different approach to deploying open source projects that has become increasingly popular lately: Railway. I found Railway very interesting because it chooses different tradeoffs. Starthub tries to be as unopinionated as possible and hands full control over to the user. In this way, it ensures that any edge case can potentially be handled (think of creating API keys through the web UI of a platform that offers no API for it). However, this also adds complexity and makes it harder to package actions. Railway instead trades freedom and security for simplicity: it lets you deploy a production-grade configuration of n8n with a single click, but it assumes you are okay with never leaving its walled garden.

More incumbents

Several incumbents seem to have adopted an approach similar to Railway's:

  • dev.sh, which is the open source version of Railway. Besides hosting your stack, dev.sh makes it possible to deploy the same stacks to your own cloud, which basically turns it into a horizontal-first version of Coolify or CapRover.

  • Dokploy

  • Elestio also lets you host open source projects on the cloud, but without a focus on the

  • Public cloud providers themselves have started offering tools to deploy open source projects with a single click.

Even more alternatives are already available: Northflank, Render, Sliplane and others. Since the focus of this article is general trends, though, I will set them aside for the time being.

Takeaway: we are moving up the stack

While up until a few years ago we had to install open source projects, and applications more generally, manually on a VPS, we can now instantly install a horizontal configuration of those projects, with the ability to scale each of the required components (database, VM, etc.) individually. In a sense, a new layer seems to be emerging in the world of cloud computing, one that focuses solely on the horizontal, scalable configuration of applications.

It is interesting to notice that the managed solutions mentioned above all rely, in one way or another, on containerisation. Cloud providers like Fly.io and Digital Ocean have been offering managed container solutions for a few years now. It sounds like

Where does Starthub stand then?

Rather than focusing exclusively on deployments, Starthub is more of a Swiss Army knife that can be used to perform arbitrary tasks, among which deploying cloud architectures. It can be useful in cases where:

  • A user never wants credentials to be shared with a third party (the sovereignty wedge)

  • A user needs assurance of zero lock-in

  • A user needs to deploy architectures across multiple cloud providers

  • A user needs to automate manual configuration steps when deploying a cloud system

What's next?

I wonder whether the next evolution of this trend could be the ability to compose stacks on the cloud, Lego-style. Railway and dev.sh are already hinting at this through their UIs: sprinkle in a bit of AI, and we could probably get meaningful autowire functionality that hooks services into each other as soon as a new one is dropped onto the canvas.
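The autowire idea boils down to a matching problem: each service on the canvas declares what it provides, and a newly dropped service declares what it requires. A minimal sketch, with service definitions invented purely for illustration:

```python
# Hedged sketch of "autowire": when a new service lands on the canvas,
# match each of its requirements against what existing services provide.
# The capability names ("sql-database", "queue") are made up for this example.

services_on_canvas = {
    "postgres": {"provides": {"sql-database"}},
    "redis": {"provides": {"queue"}},
}

def autowire(new_service: dict) -> dict:
    """Map each requirement of the new service to a provider already on the canvas."""
    wiring = {}
    for need in new_service["requires"]:
        for name, svc in services_on_canvas.items():
            if need in svc["provides"]:
                wiring[need] = name
                break  # first matching provider wins in this naive sketch
    return wiring

n8n = {"requires": {"sql-database", "queue"}}
print(autowire(n8n))
```

A real implementation would also need conflict resolution (two services providing the same capability) and the actual wiring step (injecting connection strings), which is where the AI-assisted part could plausibly come in.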

I can think of these future challenges:

  • How to streamline the deployment of application updates?

  • How to allow for additive, manual changes in an existing cloud configuration?

A Google Pay for Venezuela

· 3 min read

I am currently in Venezuela, visiting a dear friend. While I fell in love with this country, I also had the chance to witness its systemic inefficiencies. Some are tragic, like the lack of electricity, basic healthcare services and safety, while others are much more mundane (even if still annoying), like the limited choice of products at the supermarket or the lack of digital payment solutions.

Landscape

The digital payment options currently available in the country boil down to:

  • Pago Movil, for which a user needs access to a local bank account.

  • Zinli, for which users need a foreign bank account.

  • Binance, which is instead focused on crypto transactions.

Other options, like Venmo, Google Pay and the likes, are not available in the country.

What a solution could look like

It looks like some of my past work building transaction systems for mini-grids in Sub-Saharan Africa could be useful here. In fact, it sounds like a double-entry ledger could help tackle the problem. An MVP would consist exclusively of a fund transfer feature:

  1. The sender scans the recipient's QR code or enters their handle into the mobile app.

  2. The sender confirms the amount, and the transfer is recorded in the ledger.

  3. Both sender and recipient are notified of the transaction.
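The transfer step above maps naturally onto a double-entry ledger: every transaction debits one account and credits another, so the total in the system is conserved by construction. A minimal sketch (the account handles and the Ledger API are hypothetical, not the actual implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Ledger:
    balances: dict = field(default_factory=dict)
    entries: list = field(default_factory=list)

    def transfer(self, sender: str, recipient: str, amount: int) -> None:
        """Record one transfer as a matched debit/credit pair."""
        if amount <= 0:
            raise ValueError("amount must be positive")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient funds")
        # Double entry: the debit and credit are applied together,
        # so the sum of all balances never changes.
        self.balances[sender] = self.balances.get(sender, 0) - amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
        self.entries.append((sender, recipient, amount))

ledger = Ledger(balances={"@maria": 100, "@jose": 0})
ledger.transfer("@maria", "@jose", 40)
print(ledger.balances)  # {'@maria': 60, '@jose': 40}
```

The append-only `entries` log is what makes every balance auditable: any balance can be recomputed from the transaction history.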

In this blog post I will go over the implementation of the system in detail, which I built in a couple of weeks of free time.

How do users get money in and out of the system?

Great, so we figured out the basics of how to get users to send funds to each other. But how do they convert their funds into fiat and vice versa? Here is where things got tricky. In fact, while implementing the transaction engine proved to be fairly straightforward, neither option for getting money in and out of the system is viable:

  • We could build integrations with the Stripes of the world to issue payouts to our users. However, as part of the sign-up process, payment processors ask their business users to provide documentation that, at the moment, is pretty much impossible to obtain from the Venezuelan Ministry of Economy and Finances.

  • We could build a network of wakalas similar to the one Safaricom developed in Eastern Africa with Mpesa. However, building such a network would require massive amounts of funding and years of work. The political and economic instability of the country would probably make it extra hard to raise funding.

Yapago

Takeaway: the obstacle is regulation, not tech

At the moment it looks like, with my resources, connections and commitments, the best I can do is a wrapper around existing solutions (e.g. Binance). This would remove the core value of my product and make it dependent on those very solutions, which could easily eliminate it by shipping a lightweight interface of their own.

Technical vs business decentralisation

· 6 min read

In the last few months I have had the chance to work from a coworking space focused on crypto projects. It has been an interesting experience, for several reasons. First, I was reminded, once again, how vast and differentiated the world of software actually is. While web development, CRUD apps and mobile apps dominate the news that reaches me, there are hundreds, if not thousands, of verticals that I am completely unfamiliar with. Secondly, interacting with developers who work in the crypto space gave me a chance to think more deeply about ideas that I was already aware of, but had never really put serious thought into.

Decentralisation

One of the most frequent doubts I have is about the kind of problem crypto is trying to solve: is it attempting to give a technical solution to a practical problem? Initially, it definitely looked so. In fact, while Bitcoin made it easier to exchange currency with no middlemen, it is also true that most users would still want the backup option of getting human assistance when things go south (e.g. losing their credentials). However, the developers I met at the coworking space often mentioned how they were building new tools to allow for credential recovery and much more. It sounds like, as the ecosystem evolves, new, more user-friendly tools become available and lower the barrier to entry to the crypto world. If this is the general trend, then, given enough time, decentralised tools should completely remove the need for a man in the middle. In this sense, while we started out 15 years ago with a partial solution, we are now getting closer to a viable system that, besides being technically decentralised, also allows for business decentralisation. This raises a question: do businesses actually want decentralisation?

Natural monopolies and conflicts of interest

The fewer options a customer enjoys, the more negotiating power a business has over them. That is, businesses have a natural incentive to decrease optionality in the market as much as possible. Silicon Valley famously encourages entrepreneurs to attempt to build monopolies. The question then follows: if businesses have a natural drive towards centralisation, does this not create a conflict of interest with the technical efforts for decentralisation in software?

Exhibit A: the Internet

A great example of technical vs business decentralisation is the Internet. Built on decentralised technology, it was designed to have no single points of failure. Protocols like TCP/IP, HTTP, and SMTP are essentially "public goods" — no one owns them, and anyone can use them to build a service.

However, as the Internet matured, business forces aggressively centralised the services built on top of it. While the "pipes" remained decentralised, the business-related components - data, for example - became massive silos. Companies like Google, Amazon, and Meta leveraged network effects to create digital monopolies. They offered convenience and "free" services in exchange for control over data and user attention. This suggests that technical decentralisation is a necessary but insufficient condition for a truly open ecosystem. Without a way to align economic incentives, the gravity of business will always pull back toward a central hub.

Exhibit B: email vs instant messaging

A similar dynamic played out with messaging protocols in the form of email and instant messaging (Slack, WhatsApp, Discord, Telegram and the likes). Email can effectively be considered a "decentralised" protocol: no single individual or company owns email. Because it is built on open standards (SMTP, IMAP), a Gmail user can message a ProtonMail user, who can then read that message on an iPhone mail client or a Linux terminal. There is no single "CEO of Email" who can decide to shut down the network or force everyone to use the same app.

On the other hand, instant messaging networks are built on centralised designs. As a result, the Slacks and WhatsApps of the world are essentially walled gardens. One cannot send a WhatsApp message to a Slack channel (well, technically it can be done, but these networks were not built to interoperate that way). As a consequence, the companies running these services have come to own the entire stack: the protocol, the server, and the interface. From a user experience perspective, Slack is often "better" than email. It’s faster, has better emoji reactions, and feels more cohesive. Why? Because centralisation allows for rapid iteration. Slack can push a new feature to every user overnight. Email, being decentralised, takes decades to change because thousands of independent providers have to agree on new standards. As a developer, this is the ultimate "quality vs. sovereignty" trade-off. We often sacrifice the freedom of decentralised protocols (email) for the polished, high-velocity features of centralised products (Slack).

What are the best use cases for blockchains then?

If business naturally trends toward centralisation for the sake of efficiency, then the best use cases for blockchains are those where trust is more valuable than efficiency. These are some of the examples I can think of:

  • Voting systems. In a centralized system, the "admin" can modify the database. A blockchain-based system, where any citizen can potentially spin up a node to validate election results, makes the voting process "censorship-resistant" and "auditable," where the cost of a slow tally is worth the guarantee that the result is tamper-proof.

  • Philanthropy. Donors often distrust how funds are allocated. Smart contracts can ensure that funds are only released when specific, verifiable milestones are met, providing "programmatic transparency" that a traditional charity cannot match.

  • Asset registries (land, IP, carbon credits). These require a "permanent memory" that outlasts any single government or corporation. By using a blockchain, one ensures that a title deed or carbon credit cannot be double-counted or deleted by a corrupt official.
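The philanthropy case above can be illustrated with a toy escrow. This is plain Python standing in for a smart contract, and the milestone names and class API are invented for the example:

```python
# Sketch of "programmatic transparency": donated funds sit in escrow and a
# fixed share is released only when a verifiable milestone is completed.

class MilestoneEscrow:
    def __init__(self, total_funds: int, milestones: list[str]):
        self.held = total_funds
        self.released = 0
        self.pending = list(milestones)
        # Equal split per milestone, for simplicity.
        self.per_milestone = total_funds // len(milestones)

    def complete(self, milestone: str) -> int:
        """Release the milestone's share once it is verified; raises if unknown."""
        self.pending.remove(milestone)
        self.held -= self.per_milestone
        self.released += self.per_milestone
        return self.per_milestone

escrow = MilestoneEscrow(900, ["wells dug", "pumps installed", "audit passed"])
escrow.complete("wells dug")
print(escrow.released, escrow.held)  # 300 600
```

On an actual blockchain the `complete` step would be gated by an oracle or a multi-signature vote rather than a plain method call, which is exactly where the trust-over-efficiency trade-off lives.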

Conclusion

Technical decentralization and business decentralization are distinct forces. While we often view decentralization as a "silver bullet" for freedom, we must acknowledge that centralization is an efficiency feature. Especially in the business world, centralization enables the specialization and rapid scaling that form the foundation of our economy. The challenge for the current generation of developers isn't to decentralize everything, but to identify the specific areas where sovereignty is worth the inefficiency tax.

Challenges in refrigeration monitoring

· 2 min read

Industrial and commercial refrigeration is a $100b+ industry. Customers are generally large, deep-pocketed companies with regular budget cycles that, given the right incentives, should have no problem adopting new technologies. However, while waves of innovation like Industry 4.0 and programs such as Horizon 2020 had profound effects on other sectors, the refrigeration sector remained mostly untouched.

Remote monitoring

The most classic case of this disconnect is remote monitoring. When trends like the Internet of Things gained popularity by promising a future where every device would be connected to the cloud, the main doubts were about the associated costs. In fact, no matter how cheap data bundles and hardware get, not all devices need to be connected to the internet. On the other hand, industrial and commercial refrigeration, being a comparatively "wealthy" sector, should not have much trouble covering the cost of bringing massive pieces of equipment online. This should be especially true in a sector where downtime usually translates into non-negligible losses. Yet, it's 2022, and the same markets that are so quickly adopting automatic checkouts have done little to collect data about the efficiency of their machinery. Data produced by commercial and industrial machinery is still tragically hard to retrieve. Is it really just about costs, then?

Refrigeration machine in a supermarket

The one-time-fee curse

The core tension is really about payment structures. Medium and large businesses have a strong incentive to be as independent as possible and to own as much of the vertical as possible. This clashes with the recurring payment structure that is typical of the monitoring world.

Hardware platforms and lock-in

Another problem is that component manufacturers (compressors, inverters, PLC systems) have a strong incentive to try

Will open monitoring ever become mainstream in this sector?