4 posts tagged with "starthub"

Starthub - trends in OSS deployments

· 6 min read

As part of my work on Starthub, I am having to think about a go-to-market strategy. However, given that Starthub is essentially a platform to deploy cloud architectures through generic actions, I would first need to build some useful actions one by one. This might end up costing a lot of time and effort, because I have no clear idea of what kind of actions users would actually find interesting. Would developers prefer a simple action that only makes a single API call, or would they rather have an action that can deploy a fleet-tracking system in one command? Given that an action could be pretty much anything, I probably need to focus on actions that bring a lot of value in a single transaction. Generally, this means targeting the deployment of entire cloud architectures, which take years to develop and build value into. Are there any "ready-made" architectures that I know, for a fact, my users would find useful? Open source projects could be the way to go. In fact, many open source projects already have a loyal user base and a well-defined scope, and are painful to deploy. Instead of building full-fledged applications myself, I could just focus on making it simpler to deploy existing ones.

Picking an open source project

There are thousands of open source projects out there. How to pick one? If we think of the pain of deployment as a crater, then the bigger the crater, the greater the value added by Starthub. In this scenario, the crater for an open source project will be defined by:

  • Width, which is its popularity. This is measured as the number of stars on GitHub (I know this is a debatable metric, but it's an acceptable proxy).
  • Depth, which is the complexity of its deployment. This is measured as the number of cloud primitives needed for the project to be deployed.
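
To make the crater model concrete, here is a toy scoring function. The multiplicative weighting and the primitive counts are my own illustrative assumptions, not measured data:

```python
def crater_score(github_stars: int, cloud_primitives: int) -> int:
    """Toy 'crater' score: width (popularity) times depth (deployment complexity)."""
    return github_stars * cloud_primitives

# Star counts are rough figures from the post; primitive counts are guesses.
projects = {
    "n8n": (160_000, 8),
    "supabase": (90_000, 12),
    "small-cli-tool": (5_000, 1),
}

ranked = sorted(projects, key=lambda p: crater_score(*projects[p]), reverse=True)
print(ranked)  # ['n8n', 'supabase', 'small-cli-tool']
```

Any monotonic combination of width and depth would do here; the point is only that both dimensions must be large for a project to be worth targeting.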

It does not take a lot of searching to find out that the biggest "craters" are the open-source projects that have been gaining massive popularity lately and are composed of many primitives, like Supabase (90k+ GitHub stars) and n8n (160k+ GitHub stars). Let's focus on n8n, the more popular of the two. If we look at the docs, there are several solutions for deploying the simplest, single-node configuration of n8n. That's great for small and hobby deployments, but not for production-grade, horizontally scalable ones. In fact, the docs also cover queue mode, which allows a much more robust, scalable configuration. Now, while it's great that n8n also takes into account the needs of power users, the documentation is quite lengthy to read and, as often happens, it leaves a lot of guesswork up to the user. Could this be a good wedge to showcase the power of Starthub?

While reading through forums, I came across a radically different approach to deploying open source projects that has become increasingly popular lately: the Railway approach. I found Railway very interesting because it is based on different tradeoffs than the ones Starthub chose. Starthub tries to be as unopinionated as possible and hands full control over to the user. In this way, it ensures that any edge case can potentially be handled (think of creating API keys through the web UI of a platform that offers no corresponding APIs). However, this also adds complexity and introduces the danger of state drift. Railway, on the other hand, trades freedom and security for simplicity: it lets you deploy a production-grade configuration of n8n with a single click, but it assumes you are OK with never leaving its walled garden.

More incumbents

Several incumbents seem to have adopted a similar approach to Railway:

  • dev.sh, which is the open source version of Railway. Besides hosting your stack, dev.sh makes it possible to deploy the same stacks to your own cloud, which basically turns it into a horizontal-first version of Coolify or CapRover.

  • Dokploy, which leans heavily into the Docker Swarm ecosystem for horizontal scaling, offering a smoother path to multi-node clusters than many other self-hosted tools.

  • Elestio, which provides a full-service experience for over 400 open-source tools (like n8n, Supabase, or Plausible) while respecting a user's sovereignty.

  • Public cloud providers like Fly.io and DigitalOcean have been offering managed container solutions for a while now.

  • Even more alternatives: Northflank, Render, Sliplane and others all have slightly different takes on this problem, but, since the focus of this article is on general trends, I will leave them aside for now.

We are moving up the stack

What's the general trend then? While up until a few years ago we had to install open source projects, and applications more generally, manually on a VPS, we can now instantly install a horizontal configuration of those same projects, with the ability to scale each of the required components (database, VM, etc.) individually. In a sense, a new layer seems to be emerging in the world of cloud computing, one that focuses solely on the horizontal, scalable configuration of applications. What's interesting to notice is that the managed solutions mentioned above all rely, in one way or another, on containerisation.

Conclusion: where does Starthub stand?

Rather than focusing exclusively on deployments, Starthub is more of a Swiss Army knife that can be used to perform arbitrary tasks, among which deploying cloud architectures. It can be useful in cases where:

  • A user never wants credentials to be shared with a third party (the sovereignty wedge).

  • A user needs assurance of zero lock-in.

  • A user needs to deploy architectures across multiple cloud providers.

  • A user needs to automate manual configuration steps when deploying a cloud system.

Starthub: actions vs resources

· 5 min read

Over the past few months I have been thinking a lot about what a tool to deploy stacks of arbitrary complexity should look like. I have already written some blog posts about the rationale and the best level of abstraction at which to implement it. So far, it looks like some kind of orchestration engine that can process Docker and WebAssembly steps. This would make it possible to package systems (frontend, backend, any kind really) of arbitrary complexity, potentially even running UI-based configuration steps automatically. However, this is far from a complete model, and many questions are still unanswered.

Are "packages" just cloud primitives?

What are "packages" really? Do they follow the same paradigm as tools like Terraform, where a unit corresponds to a resource? It does not look like that is the case. In fact, while some packages could represent the deployment of a cloud primitive, like a virtual machine or a database, others could very well be purely about configuration. For example, a step that logs into a user's Google Cloud Platform account and copies the credentials into the environment variables of a Vue app that is going to be deployed to Netlify would not map to any cloud primitive.

Instead, it sounds like we need a model that is a little more agnostic, where a "package" can be about deploying infrastructure as well as configuring existing software and more; in other words, a model where a unit can perform any task a human being would be able to perform when deploying a new system. It sounds like we need an "action"-based model. I like the word "action" because it is generic enough to apply to all the use cases while not being tied to any specific layer of the stack or to a cloud-primitive-based paradigm. From now on, I will refer to these packages as "actions".

How to maximise reuse of actions?

If we want the system to be minimally useful, we need to lower the barrier to entry as much as possible for developers to create new actions or, better, reuse existing ones. Despite AI being great at generating code (I'll leave the discussion about code quality aside), a working action is much more than just a bundle of code: it's a tested bundle of code. By allowing users to build on top of actions created by other developers, leverage compounds, and that's where the value of a registry really lies.

Said leverage would be expressed through encapsulation, where arbitrary numbers of actions can be composed into a higher-level, composite action. Just like in any other registry system, dependencies would need to be resolved and fetched as part of action execution. As a result, in theory we would be able to compose arbitrary numbers of actions into a single action, which means not only that a set of actions can be executed with a single command, but also that those actions can be reused across different compositions.

State drift

The main challenge with this model is state drift. Safety features like idempotency and state reconciliation would be hard to guarantee with the action model, for two main reasons:

  • Because there is no one-to-one match between intent and result, the action model itself makes it hard to guarantee consistency.

  • Docker or WASM packages being agnostic means that it's impossible to make any assumptions about the logic they run, which means that implementing mechanisms to avoid state drift is left entirely to the developer of every single action.
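
To illustrate the second point, here is a minimal sketch of the kind of idempotency guard each action author would have to hand-roll, since the runtime cannot provide it generically (the state-file mechanism is purely illustrative):

```python
import json
import os
import tempfile

def run_idempotent(action_name: str, fn, state_file: str) -> str:
    """Skip an action whose effect is already recorded in a local state file.

    This is exactly the kind of bookkeeping an agnostic runtime cannot do
    for you: only the action author knows what 'already applied' means.
    """
    state: set[str] = set()
    if os.path.exists(state_file):
        with open(state_file) as f:
            state = set(json.load(f))
    if action_name in state:
        return "skipped"
    fn()  # perform the side effect (e.g. create an API key)
    state.add(action_name)
    with open(state_file, "w") as f:
        json.dump(sorted(state), f)
    return "applied"

state_file = os.path.join(tempfile.mkdtemp(), "state.json")
first = run_idempotent("create-api-key", lambda: None, state_file)
second = run_idempotent("create-api-key", lambda: None, state_file)
print(first, second)  # applied skipped
```

Even this toy version shows the weakness: the guard tracks whether the action ran, not whether its effect still holds, so drift between the state file and the real world remains possible.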

Conclusion

Shifting our mental model from resources to actions fundamentally redefines the goal of deployment. We move beyond managing the state of isolated cloud providers and start focusing on intent. By treating deployment as a series of composable, shareable tasks, we are building a collaborative language for software delivery — with Docker and WASM serving as the technical engine that makes this portable, executable paradigm possible.

Starthub - levels of abstraction

· 5 min read

Over the past weeks I have been spending a non-negligible amount of time working with Supabase, a platform for backend development that has recently exploded in popularity. Sitting on top of Postgres, Supabase uses classic SQL to dump and restore a database schema. While SQL dumps were conceived as a way to take a snapshot of a database, they can also be seen as a way to package the entire business logic of a company.

On the other hand, in one of my past blog posts I explored the idea of a top-to-bottom approach to building software, where an entire system could be deployed with a single command, independently of the number and complexity of its components, and customised from an initial template. The question then needs to be asked: could SQL be the right tool to package entire systems, so they can be exchanged, traded and deployed in a standardised way? And, if not, what would such a tool look like?

Iteration 1: SQL

Let's start from the database approach mentioned above. SQL dumps only allow for packaging of the database schema. This means that we have a standard, safe way of packaging the storage layer of any business application. However, SQL does not help us with business logic or deployment configuration. For example, SQL does not allow us to deploy servers running custom business logic to a public cloud provider. Instead, we need a tool that also allows for the deployment of cloud architecture and business logic. In other words, we need a lower-level tool.
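
The "dump as package" idea is easy to demonstrate with Python's built-in sqlite3 module, whose iterdump() plays the role that pg_dump plays for Postgres:

```python
import sqlite3

# A SQL dump captures schema *and* data in one portable text artifact.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE meters (id INTEGER PRIMARY KEY, serial TEXT)")
con.execute("INSERT INTO meters (serial) VALUES ('SM-001')")
con.commit()

dump = "\n".join(con.iterdump())
print(dump)
# The dump recreates the table and its rows anywhere SQL runs, but it says
# nothing about the servers, queues or cloud config the application needs.
```

That last comment is the whole limitation: the artifact is perfectly portable, yet it only describes the storage layer.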

Iteration 2: Infrastructure as Code

The IaC (Infrastructure as Code) industry already offers well-established tools to spin up entire cloud architectures. For example, both Terraform and its open-source fork, OpenTofu, can not only deploy entire cloud architectures, but also offer registries for developers to share modules and recipes.

Although a step forward with respect to the first iteration, deploying a cloud system of arbitrary complexity often entails manual configuration steps. For example, if we wanted to deploy Chirpstack to a VPS and get it to communicate with a backend server of sorts via webhook, we would need to create an API token via the admin panel and paste it into the environment variables of a NestJS server. It sounds like we need an escape hatch to perform manual configurations when an API is not available.
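
A sketch of what automating that last step could look like (the variable name and dotenv layout are my assumptions, not Chirpstack's or NestJS's conventions): a small helper that injects the freshly created token into the server's environment file.

```python
import os
import tempfile

def inject_env_var(env_path: str, key: str, value: str) -> None:
    """Set a key in a dotenv-style file, replacing any previous value.

    This is the glue step between 'infrastructure is up' and
    'the system actually works'.
    """
    lines: list[str] = []
    if os.path.exists(env_path):
        with open(env_path) as f:
            lines = [l for l in f.read().splitlines()
                     if not l.startswith(f"{key}=")]
    lines.append(f"{key}={value}")
    with open(env_path, "w") as f:
        f.write("\n".join(lines) + "\n")

env_file = os.path.join(tempfile.mkdtemp(), ".env")
# In the Chirpstack scenario the token would come from the admin panel;
# here we use placeholder values.
inject_env_var(env_file, "CHIRPSTACK_API_TOKEN", "tok-123")
inject_env_var(env_file, "CHIRPSTACK_API_TOKEN", "tok-456")  # overwrite, not append
print(open(env_file).read())  # CHIRPSTACK_API_TOKEN=tok-456
```

The hard part, of course, is not writing the file but obtaining the token when no API exists, which is what the next iteration addresses.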

Iteration 3: Docker

While Docker is generally used as a tool to package long-running applications, it could potentially be used in a "throwaway" fashion, where the container simply stops after deploying a cloud system. This would be quite powerful, because it would allow us to perform configuration steps of arbitrary complexity, even against third-party systems that do not offer an API. For example, if, as part of a deployment, we had to manually create an API key in a third-party tool through its UI, or configure social login, a Magnitude-powered script running inside a throwaway Docker container could log into the third-party tool and perform the configuration automatically on behalf of the user. Now, although Docker would allow arbitrarily complex systems to be deployed, even the smallest Docker images take several MB. This is particularly inefficient for small, simple packages, where the code to make even a single API call would end up weighing several MB.
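
As a sketch of the throwaway pattern (the image name and command here are hypothetical), an orchestrator would essentially build a `docker run --rm` invocation, so the container vanishes as soon as the configuration step completes:

```python
def throwaway_run(image: str, command: list[str], env: dict[str, str]) -> list[str]:
    """Build a 'docker run --rm' invocation for a one-shot configuration step.

    Command construction only: actually executing it would require a
    running Docker daemon.
    """
    argv = ["docker", "run", "--rm"]  # --rm discards the container on exit
    for key, value in env.items():
        argv += ["-e", f"{key}={value}"]
    argv += [image] + command
    return argv

# Hypothetical image that drives a web UI to create an API key for the user.
cmd = throwaway_run(
    image="example/ui-configurator:latest",
    command=["create-api-key", "--target", "https://admin.example.com"],
    env={"ADMIN_USER": "alice"},
)
print(" ".join(cmd))
```

The `--rm` flag is what makes the container "throwaway": nothing persists after the step other than its side effects on the target system.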

Iteration 4: WebAssembly

For small, specialized packages—such as a script designed to trigger a specific API call or verify a webhook—full container virtualization is often overkill. This is where WebAssembly (WASM) offers a middle ground: process-level virtualization. By targeting WASM, we can package configuration logic into binaries that are often only a few kilobytes in size. Unlike Docker, which requires a heavy daemon and a layered filesystem, WASM modules are platform-agnostic and execute at near-native speeds with near-instant startup times. This makes them the ideal "glue" for a deployment pipeline: they provide the strict isolation required to run untrusted third-party deployment scripts, without the multi-megabyte overhead of a Linux distribution. As more languages (Rust, Go, TypeScript via AssemblyScript) mature their WASM support, we move toward a world where the "package" for a cloud system is as lightweight as a SQL dump, but as capable as a full-blown installer.

Conclusion

By using SQL as our starting point, we’ve traced a path from simple data snapshots to a more ambitious vision of "executable infrastructure." We’ve moved across the spectrum of abstraction—from the rigid safety of the storage layer to the flexible, programmable environments of Docker and WebAssembly. The ideal package likely isn't a single technology, but a hybrid orchestration of WebAssembly for lightweight, instant-start configuration scripts, and Docker as the "escape hatch" for heavy, legacy, or complex edge cases. Together, they fulfill the top-to-bottom promise: an entire system that is as portable as a SQL dump, yet capable of self-assembling into a production-ready cloud environment with a single command.

IKEA for software

· 7 min read

The core product of the company I currently work for is a software platform to manage solar mini-grids. From a user's perspective, the platform consists of an admin panel and a suite of mobile-first web apps that our customers use to monitor and control solar mini-grids. It is a fairly standard IoT system: a storage layer (Postgres, Timescale and S3), a view layer (a set of Vue web apps) and a business logic layer, consisting of a set of NestJS services for timeseries data and financial transaction processing. The schema is also nothing new: tables to track the state of energy meters, tables to process inbound and outbound commands to and from devices in the field, and some RBAC tables are among the centerpieces.

Yet, it took us about a year to reach a production-grade setup. RLS rules, social login, embedding Grafana charts in the frontend, setting up a timeseries database and a double-entry ledger for transaction processing took us months to get right. During that year, I often wondered whether there was a faster, simpler way to get started from a ready-made, use-case-focused template instead of rebuilding everything, once again, from scratch. After all, our system was not substantially different from, say, one of those fleet-tracking systems commonly offered on the market.

Available options

These options currently come to mind:

  • Buy a closed-source, ready-made system. However, there seems to be no marketplace selling systems that we could fully own and customise according to our needs. At best, there are marketplaces for UI components, blocks or entire frontends, which are hard to customise against a specific database schema.

  • Use low-code platforms (eg Bubble). These tools can take you surprisingly far, as long as the user remains within their walled garden and sticks to their way of doing things. Since they have an incentive to lock users in, they also make it difficult to export applications built with their tools, and the result is more often than not incompatible with any other platform.

  • Leverage database wrappers (eg Supabase, Firebase). However, they only solve the backend part of the problem, that is, they do not offer out-of-the-box UIs.

  • AI. Great for writing boilerplate code and shipping fast, but it does not hand you a battle-tested system ready for use.

Software is still largely artisanal

Despite the trillions of dollars flowing through the industry, software development remains a largely artisanal craft. We treat every new project like a piece of bespoke furniture, meticulously hand-carving the same legs, joints, and frames that have been built a million times before. We take pride in our "hand-coded" auth flows and "custom-built" data pipelines, ignoring the fact that, to the end user, these are merely invisible utilities. In any other mature industry — from automotive to construction and furniture making — this level of repetition would be seen as an economic failure. We are currently in a pre-industrial era of code, where we lack the standardized "blueprints" that would allow us to skip the foundational labor and move straight to the architectural finishings that actually define a product’s value.

Top to bottom

All the solutions above but one seem to focus on a bottom-to-top approach, where value is added by speeding up the composition of a system from the bottom up. I am wondering, then: what about a top-to-bottom approach instead? If a company's goal is to develop a platform that resembles existing platforms, and software is non-tangible (ie, because of its immaterial nature it can always be customised), would it not make sense to start from the top, work one's way down to the highest possible layer, and customise the template from there?

Why is this not happening? Here are some of the reasons I can think of:

  • Systems are just too different from each other. Yes, many of them need auth, RLS, etc, but each system implements them in its own way.

  • Build-from-scratch syndrome. Developers have a tendency to want to reinvent the wheel for the sake of engineering. Building a system from scratch also has a hidden advantage: it guarantees full control over one's product. In fact, when building a system (code, infrastructure), a developer is implicitly mastering the use and management of the system itself. That knowledge becomes fundamental later on, when wanting to extend it, scale it or fix it. I have some doubts about this point though; in fact, the same argument can be made about AI, which, while writing the code itself, is also eroding a developer's knowledge of their own system.

  • Lack of good documentation. A full-fledged system would need to be extremely well-documented in order for a software team to adopt it.

The IKEA analogy

If the software world really is still in its artisanal stage, then the question has to be asked: could we have an "IKEA of software"? The idea has been floating around in some form or another for a long time. In one common framing, "IKEA" blocks come in the form of click-to-configure solutions (Supabase-style APIs come to mind).

In this blog, the "IKEA approach" instead refers to a higher level in the stack, where the starting point is actually the top of the stack rather than the bottom. Is there any room for an IKEA of software, where developers can package and share their code in a standardised way, and users can then leverage these "packages" to deploy preconfigured cloud applications with one command? It could make sense for a few reasons:

  • Most real-world business cloud applications are fairly well-defined: a fleet-tracking system, a bus-ticketing platform, an IoT platform. Of course, each system has its own quirks, but it is often cheaper to modify a pre-existing system than to build a completely new one.

  • Most software developers, agencies or software houses have unused software lying around that was used at some point to solve real-world problems. Being able to package their software and earn passive income from it would benefit them.

  • Developers generally do not enjoy the process of setting up yet-another-user-management system. A painful, repeatable problem would be solved in a standardised way.

  • Since cloud software would be packaged in a standardised way, developers would need to spend less time reading documentation.

  • Software would be decoupled from its creator, very much in the same way writing decouples information from the writer or video decouples content creators from their content.

Conclusion

As we look at the current landscape, it is clear where the momentum lies. From the rise of database-as-a-service to the explosion of AI-powered autocomplete, the software industry is doubling down on bottom-to-top efficiency. We are getting incredibly good at generating the individual bricks and mortar with the click of a button or a well-crafted prompt. Yet, having the bricks doesn't mean the house is built, and it certainly doesn't mean the foundation is battle-tested. If we continue to focus solely on speeding up the composition of the bottom layers, we are still leaving the hardest part — the orchestration of a production-grade system — to be solved from scratch every single time. This leads to the core question: while all our current tools, including AI, are implicitly focusing on the bottom-to-top, would there be more value in focusing on the opposite? Perhaps the future of software isn't in helping us write the code faster, but in providing the "flat-pack" systems that allow us to start at the top and only dive into the depths when we truly need to innovate.