
Starthub - trends in OSS deployments

· 6 min read

As part of my work on Starthub, I have to think about a go-to-market strategy. However, given that Starthub is essentially a platform to deploy cloud architectures through generic actions, I would first need to build some useful actions one by one. This might end up costing a lot in terms of time and effort, because I have no clear idea of what kind of actions would actually interest users. Would developers prefer a simple action that only makes a single API call, or would they rather have an action that can deploy a fleet-tracking system in one command? Given that an action could be pretty much anything, I probably need to focus on actions that bring a lot of value in a single transaction. Generally, this means targeting the deployment of entire cloud architectures, which take years to develop and build value into. Are there any "ready-made" architectures that I know, for a fact, my users would find useful? Open source projects could be the way to go. In fact, many open source projects already have a loyal user base and a well-defined scope, and are painful to deploy. Instead of building full-fledged applications myself, I could just focus on making it simpler to deploy existing ones.

Picking an open source project

There are thousands of open source projects out there. How to pick one? If we think of the pain of deployment as a crater, then the bigger the crater, the greater the value added by Starthub. In this scenario, the crater for an open source project will be defined by:

  • Width, which is its popularity. This is measured as the number of stars on GitHub (I know this is a debatable metric, but it's an acceptable proxy).
  • Depth, which is the complexity of its deployment. This is measured as the number of cloud primitives needed for the project to be deployed.
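
The two dimensions can be combined into a rough "crater score". A minimal sketch, where the multiplicative formula and the primitive counts are my own illustrative assumptions, not real data (only the star counts come from the text):

```python
# Hypothetical "crater score": popularity (GitHub stars) times deployment
# complexity (number of cloud primitives). The formula and the primitive
# counts are illustrative assumptions.
projects = {
    "n8n":      {"stars": 160_000, "primitives": 8},
    "supabase": {"stars": 90_000,  "primitives": 12},
}

def crater_score(stars: int, primitives: int) -> int:
    # Width (popularity) times depth (deployment complexity).
    return stars * primitives

ranked = sorted(
    projects.items(),
    key=lambda kv: crater_score(kv[1]["stars"], kv[1]["primitives"]),
    reverse=True,
)
for name, p in ranked:
    print(name, crater_score(p["stars"], p["primitives"]))
```

Any monotonic combination of the two dimensions would do; the point is simply to rank projects by the value unlocked when their deployment pain disappears.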

It does not take a lot of searching to find out that the biggest "craters" are represented by those open source projects that have been gaining massive popularity lately and are composed of a lot of primitives, like Supabase (90k+ GitHub stars) and n8n (160k+ GitHub stars). Let's focus on n8n, the more popular of the two. If we look at the docs, there are several solutions for deploying the simplest, single-node configuration of n8n. That's great for small and hobby deployments, but not for production-grade, horizontally scalable ones. In fact, the docs also cover queue mode, which allows a much more robust, scalable configuration. Now, while it's great that n8n also takes into account the needs of power users, the documentation is quite lengthy to read and, as often happens, it leaves a lot of guesswork up to the user. Could this be a good wedge to showcase the power of Starthub?

While reading through forums, I came across a radically different approach to deploying open source projects that has become increasingly popular lately: the Railway approach. I found Railway very interesting because it is based on different tradeoffs than the ones Starthub chose. Starthub tries to be as unopinionated as possible and hands over full control to the user. In this way, it ensures that any edge case can potentially be handled (think of the creation of API keys through the web UI of a platform that offers no corresponding APIs). However, it also adds complexity and introduces the danger of state drift. Railway, on the other hand, trades freedom (and some sovereignty over credentials) for simplicity: it lets you deploy a production-grade configuration of n8n with a single click, but it assumes you are OK with never leaving its walled garden.

More incumbents

Several incumbents seem to have adopted a similar approach to Railway:

  • dev.sh, which is the open source version of Railway. Besides hosting your stack, dev.sh makes it possible to deploy the same stacks to your own cloud, which basically turns it into a horizontal-first alternative to Coolify or CapRover.

  • Dokploy, which leans heavily into the Docker Swarm ecosystem for horizontal scaling, offering a smoother path to multi-node clusters than many other self-hosted tools.

  • Elestio, which provides a fully managed experience for 400+ open source tools (like n8n, Supabase, or Plausible) while respecting the user's sovereignty.

  • Public cloud providers like Fly.io and DigitalOcean have been offering managed container solutions for a while now.

  • Even more alternatives: Northflank, Render, Sliplane and others all have slightly different takes on this problem, but, since the focus of this article is general trends, I will leave them aside for now.

We are moving up the stack

What's the general trend then? While up until a few years ago we had to install open source projects (and, more generally, applications) manually on a VPS, we can now instantly install a horizontal configuration of said projects, with the ability to scale each of the required components (db, vm, etc) individually. In a sense, a new layer seems to be emerging in the world of cloud computing, one that focuses solely on the horizontal, scalable configuration of applications. What's interesting to notice is that the managed solutions mentioned above all rely, in one way or another, on containerisation.

Conclusion: where does Starthub stand?

Rather than focusing exclusively on deployments, Starthub is more of a Swiss Army knife that can be used to perform arbitrary tasks, among which deploying cloud architectures. It can be useful in cases where:

  • A user never wants credentials to be shared with a third party (the sovereignty wedge).

  • A user needs assurance of zero lock-in.

  • A user needs to deploy architectures across multiple cloud providers.

  • A user needs to automate manual configuration steps when deploying a cloud system.

Energy as a tool for social impact

· 5 min read

In the context of energy access, "PUE" stands for "Productive Use of Energy". It describes the use of energy to generate income, create value, or improve productivity. It is the holy grail of energy access and the dream of every entrepreneur in the sector. This is because PUE tackles two different problems at once:

  • Financial viability. PUE creates a virtuous cycle for the utility or energy provider. By enabling income-generating activities, the provider shifts the customer base from low-consumption households to "anchor loads." These users consume more energy and, crucially, have a higher ability to pay because the energy itself is financing their bill. This reduces the risk of default and improves the payback period on infrastructure.

  • Poverty reduction. Unlike basic electrification — which provides comfort (lighting) but rarely changes a family's economic bracket — PUE addresses the productivity gap. By mechanizing labor-intensive tasks (like milling grain or pumping water), energy saves time and multiplies output. This transitions a community from subsistence to a surplus economy, creating local jobs that outlast the initial energy project.

However, PUE is surprisingly hard to achieve. While the "600 million people without access to energy" pitch helped companies like Sunking, Zola and similar solar startups get investor support to develop solutions for the residential market (lamps, TV sets, fans, phone chargers and more), most productive use cases remain largely untouched. I keep wondering: if productive use cases turn energy into a platform on which customers generate income, it would not be unreasonable to think that productive users would make better customers. In other words, wouldn't PUE customers likely be the best customers? Maybe PUE is hard for reasons other than a lack of incentives.

Making the pie bigger

If we visualize energy use as a pyramid, we can see a clear divide between convenience and transformation. At the broad base are 'light' applications: phone charging, LED bulbs, and home entertainment. These improve quality of life, but their economic impact is often capped. A shopkeeper might earn a few extra dollars by offering phone charging, but the 'size of the pie' remains largely the same. As we move up the pyramid into refrigeration and grain milling, we enter the realm of value-added processing. At the very peak, we find 'heavy' applications: well-digging, mechanized irrigation, and field plowing. This is where the magic happens. While lighting merely extends the hours of an existing economy, industrial-scale energy expands that economy. A community that moves from hand-tilling to mechanized irrigation doesn't just work longer; they produce more, waste less, and unlock the ability to sell processed goods rather than raw commodities. To truly 'make the pie bigger,' we must shift our focus from energy for consumption to energy for production.

Energy as a tool for impact

Is this really about energy?

Of course, a tractor cannot be powered through solar energy, and a well cannot be dug with a solar-powered excavator (yet). This makes me wonder then: is this really about energy, or rather about power? To move the needle on poverty, we don't just need kilowatt-hours spread over a day; we need the high-torque, high-wattage capability to move heavy machinery.

If we assume that "increasing the pie" is more about power than about energy, then the question becomes: is solar always the best way to achieve it? Distributed solar is excellent for efficiency and low-load electronics. But for the "top of the pyramid" tasks, the capital expenditure (CAPEX) for batteries and inverters capable of handling high surge currents is often prohibitive for the very communities that need them most.
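
The power-versus-energy distinction can be made concrete with some back-of-the-envelope arithmetic. The appliance ratings below are rough, illustrative figures of my own, not measured data:

```python
# Energy (kWh) vs power (kW): two loads can consume comparable daily
# energy while demanding wildly different instantaneous power.
# Appliance ratings are rough illustrative figures.

led_bulb_w = 5      # a single LED bulb
mill_w = 3000       # a small grain-mill motor

# Daily energy if the bulb runs 12 h and the mill runs 1 h
bulb_kwh = led_bulb_w * 12 / 1000   # 0.06 kWh/day
mill_kwh = mill_w * 1 / 1000        # 3.0 kWh/day

# Peak power the system must deliver (ignoring motor surge currents,
# which can be several times the rated power at startup)
print(f"bulb: {bulb_kwh} kWh/day at {led_bulb_w} W peak")
print(f"mill: {mill_kwh} kWh/day at {mill_w} W peak")
print(f"peak power ratio: {mill_w / led_bulb_w:.0f}x")
```

A battery bank sized for the mill's energy might still fail to start it: the binding constraint is the 600x jump in peak power, which is exactly what drives up the CAPEX for inverters and batteries.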

Domain thinking

I am a big fan of domain thinking. When solving a problem in a given domain is hard, changing domains often improves one's odds of solving it. One of my favourite examples of this is the Fourier transform in mathematics, where complex convolution integrals can be turned into simple multiplications, which can then be solved much more easily. In our case, if we view PUE strictly as an energy problem, we get stuck on the physical constraints of hardware, which make it prohibitively expensive to solve any kind of problem in a context that is extremely price sensitive.
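
The Fourier example can be stated precisely via the convolution theorem: a hard integral in the time domain becomes a pointwise product in the frequency domain,

```latex
\mathcal{F}\{(f * g)(t)\} = \hat{f}(\omega)\,\hat{g}(\omega),
\qquad \text{where} \qquad
(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau .
```

Nothing about the problem changed; only the domain did, and the operation that was expensive in one domain is trivial in the other. That is the move I want to make with PUE.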

The financial domain

When productivity is seen through the eyes of an investor rather than an engineer, it becomes a financing problem. What if, instead of changing the technical specs of the tools available to farmers, we just focus on making them more accessible? This can be done in several ways. Here are a couple:

  • Rent-to-own. Much like the Solar Home Systems (SHS) revolution, applying lease-to-own models to high-torque machinery lowers the barrier to entry. The asset itself acts as the collateral, and the increased yield from the "power" pays for the "energy."

  • Cost-spreading. Companies like Hello Tractor offer on-demand plowing services to farmers. This shifts the burden from CAPEX (buying the tractor) to OPEX (paying for the service), allowing the "pie-growing" activity to happen without the user needing to solve the energy/power infrastructure problem themselves.
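
A rough payback sketch shows why the rent-to-own framing can work. All figures below are hypothetical, chosen only to illustrate the mechanism:

```python
# Rent-to-own payback sketch: the extra yield enabled by the machine
# finances its own lease. All figures are hypothetical.

machine_cost = 2400      # upfront price of the machine ($)
monthly_lease = 120      # rent-to-own installment ($/month)
extra_income = 200       # additional monthly income from mechanization ($)

months_to_own = machine_cost / monthly_lease      # lease term implied
monthly_surplus = extra_income - monthly_lease    # cash left over each month

print(f"owned after {months_to_own:.0f} months, "
      f"with ${monthly_surplus}/month surplus along the way")
```

As long as the productivity gain exceeds the installment, the asset is self-financing: the "power" pays for the "energy", and default risk drops accordingly.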

Conclusion

As engineers, we often have a tendency to see the problems around us from a technical point of view. If we stay stuck in the "energy domain," we keep trying to build cheaper batteries or smarter load-shedding algorithms. These are noble pursuits, but they are incremental. However, when we shift domains — into finance, logistics, or even community-based ownership models — we find that we can bypass technical bottlenecks entirely. We stop trying to make a solar panel act like a diesel generator and instead build a financial system that makes the "force" of a diesel generator affordable, or a sharing economy that makes it efficient.

Starthub: actions vs resources

· 5 min read

Over the past few months I have been thinking a lot about what a tool to deploy stacks of arbitrary complexity should look like. I have already written some blog posts about the rationale and the best level of abstraction to implement it. So far, it looks like some kind of orchestration engine that can process Docker and WebAssembly steps. This would allow us to package systems (frontend, backend, any kind really) of arbitrary complexity, potentially even running UI-based configuration steps automatically. However, this is far from a complete model, and many questions are still unanswered.

Are "packages" just cloud primitives?

What are "packages" really? Do they follow the same paradigm as tools like Terraform, where a unit corresponds to a resource? It does not look like that is the case. In fact, while some packages could represent the deployment of a cloud primitive, like a virtual machine or a database, others could very well be purely about configuration. For example, a step that logs into a user's Google Cloud Platform account and copies the credentials into the environment variables of a Vue app that is going to be deployed to Netlify would not map to a cloud primitive.

Instead, it sounds like we need a model that is a little more agnostic, where a "package" can be about deploying infrastructure as well as configuring existing software and more; in other words, a model where a unit can perform any task a human being would be able to perform when deploying a new system. It sounds like we need an "action"-based model. I like the word "action" because it is generic enough to apply to all the use cases while not being tied to any specific layer in the stack or to a cloud-primitive-based paradigm. From now on, I will then refer to these packages as "actions".
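
As a minimal sketch of what such a contract might look like (the interface and all names below are my own illustration, not Starthub's actual API), an action only promises to perform a task and hand outputs downstream:

```python
# Hypothetical "action" contract: any unit of work a human could perform
# during a deployment, not tied to any cloud primitive.
from typing import Protocol

class Action(Protocol):
    name: str

    def run(self, env: dict[str, str]) -> dict[str, str]:
        """Perform the task and return outputs for downstream actions."""
        ...

class CopyCredentials:
    """An action that is pure configuration, not infrastructure."""
    name = "copy-gcp-credentials"

    def run(self, env: dict[str, str]) -> dict[str, str]:
        # e.g. take a credential fetched by an earlier step and expose it
        # to the frontend build step as an environment variable
        return {"VITE_GCP_KEY": env["GCP_KEY"]}

outputs = CopyCredentials().run({"GCP_KEY": "abc123"})
print(outputs)  # {'VITE_GCP_KEY': 'abc123'}
```

Note that nothing in the contract mentions resources or state; the unit of composition is the task itself.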

How to maximise reuse of actions?

If we want the system to be minimally useful, we need to lower the barrier to entry as much as possible for developers to create new actions, or, better, reuse existing ones. Despite AI being great at generating code (I'll leave the discussion about code quality aside), a working action is much more than just a bundle of code. It's a tested bundle of code. By allowing users to build on top of actions created by other developers, leverage compounds, and that's where the value of a registry really lies.

Said leverage would be expressed through encapsulation, where arbitrary numbers of actions can be composed into a higher-level, composite action. Just like in any other kind of registry system, dependencies would need to be resolved and fetched as part of action execution. As a result, in theory we would be able to compose arbitrary numbers of actions into a single action, which in turn means not only that a set of actions can all be executed with a single command, but also that those actions can be reused across projects.
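
The composition and dependency-resolution idea above can be sketched in a few lines. The action names and structure are illustrative; a real registry would fetch dependencies over the network and run them in containers or WASM sandboxes:

```python
# Sketch of composite actions: each action declares its dependencies,
# which are resolved depth-first and run exactly once.

actions = {
    "deploy-db":       {"deps": [], "log": "database up"},
    "deploy-api":      {"deps": ["deploy-db"], "log": "api up"},
    "deploy-frontend": {"deps": ["deploy-api"], "log": "frontend up"},
    "deploy-stack":    {"deps": ["deploy-db", "deploy-api", "deploy-frontend"],
                        "log": "stack ready"},
}

def run(name: str, done: set[str], trace: list[str]) -> None:
    if name in done:
        return                       # each dependency runs exactly once
    for dep in actions[name]["deps"]:
        run(dep, done, trace)
    trace.append(actions[name]["log"])
    done.add(name)

trace: list[str] = []
run("deploy-stack", set(), trace)
print(trace)  # ['database up', 'api up', 'frontend up', 'stack ready']
```

Running the single composite "deploy-stack" action executes the whole tree, and each leaf ("deploy-db", etc.) remains independently reusable in other composites.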

State drift

The main challenge with this model is state drift. Safety features like idempotency and state reconciliation guarantees would be hard to offer with the action model for two main reasons:

  • Because there is no one-to-one match between intent and result, the action model itself makes it hard to guarantee consistency.

  • Docker or WASM packages being agnostic means that it's impossible to make any assumption about the logic they are running, which means that the implementation of mechanisms to avoid state drift is completely left to the developer of every single action.
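
Since the engine cannot reason about opaque container logic, any idempotency has to live inside each action. A purely illustrative check-before-create guard, of the kind every action author would have to write themselves:

```python
# With opaque Docker/WASM payloads, idempotency cannot be enforced by the
# orchestration engine; each action must guard against re-execution on its
# own. An illustrative check-before-create pattern:

existing_keys = {"prod-api-key"}    # stand-in for a lookup against the real system

def create_api_key(name: str) -> str:
    if name in existing_keys:
        # A second deploy finds the key and does nothing: safe to re-run.
        return f"skipped: {name} already exists"
    existing_keys.add(name)
    return f"created: {name}"

print(create_api_key("prod-api-key"))     # skipped on a re-run
print(create_api_key("staging-api-key"))  # created on first run
```

The engine sees only an exit code either way; whether a re-run is safe depends entirely on the author having written this kind of guard.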

Conclusion

Shifting our mental model from resources to actions fundamentally redefines the goal of deployment. We move beyond managing the state of isolated cloud providers and start focusing on intent. By treating deployment as a series of composable, shareable tasks, we are building a collaborative language for software delivery — with Docker and WASM serving as the technical engine that makes this portable, executable paradigm possible.

Starthub - levels of abstraction

· 5 min read

Over the past weeks I have been spending a non-negligible amount of time working with Supabase, a platform for backend development that has recently exploded in popularity. Sitting on top of Postgres, Supabase uses classic SQL to dump and restore a database schema. While SQL dumps were conceived as a way to take a snapshot of a database, they can also be seen as a way to package the entire business logic of a company.

On the other hand, in one of my past blog posts I explored the idea of a top-to-bottom approach to building software, where an entire system could be deployed with a single command, independently of the number and complexity of its components, and customised from an initial template. The question then needs to be asked: could SQL be the right tool to package entire systems, so they can be exchanged, traded and deployed in a standardised way? And, if not, what would such a tool look like?

Iteration 1: SQL

Let's start from the database approach mentioned above. SQL dumps only allow for packaging of the database schema. This means that we have a standard, safe way of packaging the storage layer of any business application. However, SQL does not help us with business logic or deployment configuration. For example, SQL does not allow us to deploy servers running custom business logic to a public cloud provider. Instead, we need a tool that also allows for the deployment of cloud architecture and business logic. In other words, a lower-level tool.

Iteration 2: Infrastructure as Code

The IaC (Infrastructure as Code) industry already offers well-established tools to spin up entire cloud architectures. For example, both Terraform and its open-source fork, OpenTofu, can not only deploy entire cloud architectures, but also offer registries for developers to share modules and recipes.

Although a step forward with respect to the first iteration, deploying a cloud system of arbitrary complexity often entails manual configuration steps. For example, if we wanted to deploy ChirpStack to a VPS and get it to communicate with a backend server of sorts via webhook, we would need to create an API token via the admin panel and paste it into the environment variables of a NestJS server. It sounds like we need an escape hatch to perform manual configurations when an API is not available.

Iteration 3: Docker

While Docker is generally used as a tool to package long-running applications, it could potentially be used in a "throwaway" fashion, where, after deploying a cloud system, the container simply stops. This would be quite powerful, because it would allow us to perform configuration steps of arbitrary complexity, even against third-party systems that do not offer an API. For example, if, as part of a deployment, we had to manually create an API key in a third-party tool through its UI, or configure social login, a Magnitude-powered script running inside a throwaway Docker container could log into the third-party tool and perform the configuration automatically on behalf of the user. Now, although Docker would allow arbitrarily complex systems to be deployed, even the smallest Docker images take several MB. This is particularly inefficient for small, simple packages, where the code to make even a single API call would end up weighing several MB.
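
The "throwaway" pattern is essentially `docker run --rm`: the container performs a one-off configuration step and is discarded on exit. A sketch of how an orchestrator might assemble such an invocation (the image name and environment variable are hypothetical):

```python
# Build a "run once and discard" Docker invocation for a configuration
# step. The image name and variables are hypothetical; shlex is only
# used to render the command safely for display.
import shlex

def throwaway_run(image: str, env: dict[str, str]) -> list[str]:
    cmd = ["docker", "run", "--rm"]   # --rm removes the container on exit
    for key, value in env.items():
        cmd += ["-e", f"{key}={value}"]
    cmd.append(image)
    return cmd

cmd = throwaway_run("acme/configure-sso:latest",
                    {"ADMIN_URL": "https://example.com"})
print(shlex.join(cmd))
```

The container behaves like an installer rather than a service: it exists only for the duration of the configuration step, then leaves no trace behind.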

Iteration 4: WebAssembly

For small, specialized packages—such as a script designed to trigger a specific API call or verify a webhook—full container virtualization is often overkill. This is where WebAssembly (WASM) offers a middle ground: process-level virtualization. By targeting WASM, we can package configuration logic into binaries that are often only a few kilobytes in size. Unlike Docker, which requires a heavy daemon and a layered filesystem, WASM modules are platform-agnostic and execute at near-native speeds with near-instant startup times. This makes them the ideal "glue" for a deployment pipeline: they provide the strict isolation required to run untrusted third-party deployment scripts, without the multi-megabyte overhead of a Linux distribution. As more languages (Rust, Go, TypeScript via AssemblyScript) mature their WASM support, we move toward a world where the "package" for a cloud system is as lightweight as a SQL dump, but as capable as a full-blown installer.

Conclusion

By using SQL as our starting point, we’ve traced a path from simple data snapshots to a more ambitious vision of "executable infrastructure." We’ve moved across the spectrum of abstraction—from the rigid safety of the storage layer to the flexible, programmable environments of Docker and WebAssembly. The ideal package likely isn't a single technology, but a hybrid orchestration of WebAssembly for lightweight, instant-start configuration scripts, and Docker as the "escape hatch" for heavy, legacy, or complex edge cases. Together, they fulfill the top-to-bottom promise: an entire system that is as portable as a SQL dump, yet capable of self-assembling into a production-ready cloud environment with a single command.

IEC specifications for software

· 6 min read

Recently I have had to implement a token generation system to help customers buy energy in solar minigrids. The STS standard is widely used in Sub-Saharan countries, and it specifies how tokens should be generated to control energy, gas and water meters. Most smart energy meters deployed in the Sub-Saharan region follow IEC 62055-41. For those who do not know (I didn't), IEC specifications are international technical standards published by the IEC (International Electrotechnical Commission). They define how electrical, electronic, and digital systems should be designed, built, tested, and made interoperable. Reading such a specification for the first time made me wonder why specifications like these are needed, why they are not mainstream in the world of software, and whether they could add value to it.

Do international standards exist in the world of software?

Some such standards are actually about software. Here are some examples:

  • IEC 62304 is the international standard for medical device software life cycle processes.

  • ISO 26262 is a standard for functional safety in automotive software.

  • MISRA C are guidelines for the development of software in critical systems.

So, yes, standard specifications have existed in mission-critical software for a long time.

Could APIs be considered standards? Yes and no. Yes, in the sense that they separate the "what" from the "how": when we call a third-party API, we only know what the API does, and do not really care how it achieves its goal. There are some attempts to standardise APIs themselves, but they are specific to fields such as healthcare, where standards such as HL7/FHIR exist. No, in the sense that there is no standardised solution for common use cases. For example, there is no standard for a bus ticketing platform, although it would work pretty much the same anywhere in the world.

Lightweight software

While standards exist, even in the world of software, they do not seem to apply to lightweight software, where "lightweight" refers to software where user lives are not necessarily at stake, but where the business case has a clear scope and a standard solution. For example:

  • A bus ticketing platform. While every city in the world needs a way to issue and validate tickets, there is no universal "IEC-like" specification for the underlying software logic. Each vendor builds a proprietary system, making it difficult for cities to switch providers or integrate with third-party apps without expensive, custom-built adapters.

  • A ride-hailing platform. Despite the core functionality being identical — matching a driver with a rider, calculating a fare, and processing a payment — every company reinvented the wheel. Because no standard specification exists, developers are stuck building similar applications over and over again.

  • A LoRaWAN device monitoring platform. While the LoRaWAN protocol standardizes how data travels through the air, the software that monitors and visualizes that data is often rebuilt from scratch, even though the core value - collecting timeseries data from devices in the field - is common to many use cases. A standard specification for the monitoring layer would allow users to swap their dashboarding software as easily as they swap a physical sensor.

In other words, specifications are not just useful, but necessary to allow technical ecosystems to scale.

Could we benefit from software standards being more widespread?

Probably. If we had a standard specification for the minimal functionality a platform should offer, some of the benefits would carry over:

  • Interoperability. Integration would move from a "custom build" to a "configuration" task. For instance, a Vue-based admin dashboard wouldn't need to be custom-coded for every new project; it could be built to expect the standard entities and state transitions of a "ticketing" or "monitoring" specification, making it instantly compatible with any compliant backend.

  • Decoupling. True decoupling is currently a myth in many projects because the frontend is often a mirror of a changing, proprietary backend API. A formal specification provides a "contract-first" development environment where teams can work in total independence, knowing that as long as they meet the spec, the components will fit together perfectly.

  • Cost of development. Specifications turn "unknowns" into "knowns." When a project is broken into standardized blocks, stakeholders can more accurately judge whether to buy or build. Instead of estimating a vague feature, they are assessing the implementation of a defined standard, significantly reducing the "discovery" phase of development.

  • Best practices. Instead of a thousand developers solving the same edge cases in isolation, a common specification allows the global community to refine a single, optimal logic. Much like how the security community improves OAuth, a "ride-hailing spec" would benefit from the collective wisdom of developers worldwide, baking edge-case handling and security directly into the standard.
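
To make the "configuration, not custom build" point concrete, here is what a tiny slice of a hypothetical "ticketing" specification could look like: a fixed vocabulary of ticket states and the legal transitions between them. Everything below is invented for illustration; no such spec exists today:

```python
# A hypothetical fragment of a "bus ticketing" specification: a fixed set
# of ticket states and the legal transitions between them. Any compliant
# backend and frontend could agree on exactly this, with no custom adapter.
from enum import Enum

class TicketState(Enum):
    ISSUED = "issued"
    VALIDATED = "validated"
    EXPIRED = "expired"
    REFUNDED = "refunded"

TRANSITIONS = {
    TicketState.ISSUED: {TicketState.VALIDATED, TicketState.EXPIRED,
                         TicketState.REFUNDED},
    TicketState.VALIDATED: set(),   # terminal states
    TicketState.EXPIRED: set(),
    TicketState.REFUNDED: set(),
}

def can_transition(src: TicketState, dst: TicketState) -> bool:
    return dst in TRANSITIONS[src]

print(can_transition(TicketState.ISSUED, TicketState.VALIDATED))    # True
print(can_transition(TicketState.VALIDATED, TicketState.REFUNDED))  # False
```

A dashboard built against this state machine would work with any compliant backend, which is exactly the interoperability benefit described above.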

Why is this not happening then?

I can think of a few main reasons:

  • Software is "cheap". Not in the sense that the time taken to make it is cheap, but because of its immaterial nature, which makes improvisation much more acceptable than in the world of hardware. When hardware is involved, a list of components with costs, lead times, and replacements is needed because the cost of a physical error is terminal.

  • Standards mean overhead. Implementing a rigorous specification requires a significant upfront investment in "reading and understanding" before a single line of code is written. In a world driven by minimum viable products and venture capital, the pressure to ship features often outweighs the long-term benefits of standardization. Most startups would rather be first to market with a messy, proprietary API than be second to market with a perfectly compliant IEC-standardized one.

  • Software bugs are, in many cases, solvable in production. Because we can "hotfix" a bug and deploy it in seconds, the culture of software development has shifted away from exhaustive pre-planning. In hardware or civil engineering, you cannot "patch" a bridge once it’s built; in software, we have grown comfortable with the "move fast and break things" philosophy because the cost of a mistake is rarely permanent.

  • Lock-in incentives. Proprietary "specifications" are often a business feature, not a bug. When a team builds their own unique solution for a common problem, they create implicit vendor lock-in. By avoiding open standards, companies make it technically difficult and financially expensive for customers to migrate to a competitor. In this landscape, standardization is seen as a threat to "moats" and customer retention, whereas in the world of hardware and utilities, the IEC approach recognizes that the ecosystem cannot scale if every component is a walled garden.

Conclusion

In conclusion, while the idea of universal software standards is logically sound, the reality of the software industry often creates a "rational" resistance to them. In many sectors, the pressure to standardize simply isn't high enough to overcome the massive gravitational pull of speed, profit, and competition.

IKEA for software

· 7 min read

The core product of the company I currently work for is a software platform to manage solar mini-grids. From a user's perspective, the platform consists of an admin panel and a suite of mobile-first web apps that our customers use to monitor and control solar mini-grids. It is a fairly standard IoT system: a storage layer (Postgres, Timescale and S3), a view layer (a set of Vue web apps) and a business logic layer, consisting of a set of NestJS services for timeseries data and financial transaction processing. The schema is also nothing new: a set of tables to track the state of energy meters, tables to process inbound and outbound commands from and to devices in the field, and some RBAC tables are among the centerpieces.

Yet, it took us about a year to reach a production-grade setup. RLS rules, social login, embedding Grafana charts in the frontend, setting up a timeseries database and a double-entry ledger for transaction processing took us months to get right. During that year, I often wondered about a faster, simpler way to get started from a ready-made, use-case-focused template instead of rebuilding everything, once again, from scratch. After all, our system was not substantially different from, say, one of those fleet-tracking systems that are commonly offered in the market.

Available options

These options currently come to mind:

  • Buy a closed-source, ready-made system. However, there seemed to be no marketplace that sold systems that we could fully own and customise according to our needs. At best, there are marketplaces for UI components, blocks or entire frontends, which are hard to customise against a specific database schema.

  • Use low-code platforms (eg Bubble). These tools can take you surprisingly far, as long as the user remains within their walled garden and sticks to their way of doing things. Since they have an incentive to lock users in, they also make it difficult to export applications built with their tools, and the result is more often than not incompatible with any other platform.

  • Leverage database wrappers (eg Supabase, Firebase). However, they only solve the backend part of the problem, that is, they do not offer out-of-the-box UIs.

  • AI. Great for writing boilerplate code and shipping fast, but it does not give you a battle-tested system that is ready for use.

Software is still largely artisanal

Despite the trillions of dollars flowing through the industry, software development remains a largely artisanal craft. We treat every new project like a piece of bespoke furniture, meticulously hand-carving the same legs, joints, and frames that have been built a million times before. We take pride in our "hand-coded" auth flows and "custom-built" data pipelines, ignoring the fact that, to the end user, these are merely invisible utilities. In any other mature industry — from automotive to construction and furniture making — this level of repetition would be seen as an economic failure. We are currently in a pre-industrial era of code, where we lack the standardized "blueprints" that would allow us to skip the foundational labor and move straight to the architectural finishings that actually define a product’s value.

Top to bottom

All the solutions above but one seem to focus on a bottom-to-top approach, where value is added by speeding up the composition of a system from the ground up. I wonder, then: what about a top-to-bottom approach instead? If a company's goal is to develop a platform that resembles existing platforms, and software is non-tangible (ie, because of its immaterial nature it can always be customised), would it not make sense to start from the top, work one's way down only as far as necessary, and customise the template from there?

Why is this not happening? Here are some of the reasons I can think of:

  • Systems are just too different from each other. Yes, many of them need auth, RLS, etc, but each system has its own.

  • Build-from-scratch syndrome. Developers have a tendency to want to reinvent the wheel for the sake of engineering. Building a system from scratch also has a hidden advantage, which is guaranteeing full control over one's product. In fact, when building a system (code, infrastructure), a developer is implicitly mastering the use and management of the system itself. That knowledge becomes fundamental later on, when wanting to extend it, scale it or fix it. I have some doubts about this point though; in fact, the same argument can be made about AI, which, while writing the code itself, is also eroding a developer's knowledge of their own system.

  • Lack of good documentation. A full-fledged system would need to be extremely well-documented in order for a software team to adopt it.

The IKEA analogy

If the software world really is still in its artisanal stage, then the question has to be asked: could we have an "IKEA of software"? The idea has been floating around in some form or another for a long time. In the case below, "IKEA" blocks come in the form of click-to-configure solutions (Supabase-style APIs come to mind).

In this blog, the "IKEA approach" instead refers to a higher level in the stack, where the starting point is the top of the stack rather than the bottom. Is there any room for an IKEA of software, where developers can package their code in a standardised way, and users can then leverage these "packages" to deploy preconfigured cloud applications with one command? It could make sense for a few reasons:

  • Most real-world business cloud applications are fairly well-defined: a fleet-tracking system, a bus-ticketing platform, an IoT platform. Of course, each system has its own quirks, but it is often cheaper to modify a pre-existing system than to build a completely new one.

  • Most software developers, agencies and software houses have unused software lying around that was used at some point to solve real-world problems. Being able to package that software and earn passive income from it would benefit them.

  • Developers generally do not enjoy the process of setting up yet-another-user-management system. A painful, repeatable problem would be solved in a standardised way.

  • Since cloud software is packaged in a standardised way, developers would need to spend less time reading documentation.

  • Software would be decoupled from its creator, very much in the same way writing decouples information from the writer or video decouples content creators from their content.

Conclusion

As we look at the current landscape, it is clear where the momentum lies. From the rise of database-as-a-service to the explosion of AI-powered autocomplete, the software industry is doubling down on bottom-to-top efficiency. We are getting incredibly good at generating the individual bricks and mortar with the click of a button or a well-crafted prompt. Yet, having the bricks doesn't mean the house is built, and it certainly doesn't mean the foundation is battle-tested. If we continue to focus solely on speeding up the composition of the bottom layers, we are still leaving the hardest part — the orchestration of a production-grade system — to be solved from scratch every single time. This leads to the core question: while all our current tools, including AI, are implicitly focusing on the bottom-to-top, would there be more value in focusing on the opposite? Perhaps the future of software isn't in helping us write the code faster, but in providing the "flat-pack" systems that allow us to start at the top and only dive into the depths when we truly need to innovate.

Google Pay for Venezuela

· 3 min read

I am currently in Venezuela, visiting a dear friend. While I fell in love with this country, I also had the chance to witness its systemic inefficiencies. Some are tragic, like the lack of electricity, basic healthcare services and safety, while others are much more mundane (even if still annoying), like the limited choice of products at the supermarket or the lack of digital payment solutions. So I started wondering: what could a Google Pay for Venezuela look like?

Landscape

The incumbents that I am aware of in the Venezuelan fintech sector are:

  • Pago Movil, for which users need access to a local bank account. Besides that, the UX is often clunky, and it’s tied to the bolívar (VES), which suffers from hyperinflation. It is also worth mentioning that in Venezuela, people often use USD for stability but Pago Movil for liquidity. Yapago (the working name for this idea) would aim for a middle ground, since its ledger would track USD rather than VES.

  • Zinli, for which users need a foreign bank account. While popular, Zinli is effectively a dollarized wallet backed by Banco Mercantil Panama. It leverages US-based protocols like the ACH (Automated Clearing House) network and Visa’s rails through Zelle, which means that it's essentially a solution for the "connected" elite or those with family abroad.

  • Binance, which is a generic solution focused on crypto transactions.

It seems that there might be room for a simple, local platform to exchange funds.

What a solution could look like

Some of my experience building transaction systems for solar mini-grids in Sub-Saharan Africa could be useful here. In fact, it sounds like a double-entry ledger could help tackle the problem. An MVP would consist exclusively of a fund-transfer feature:

  1. The sender scans the recipient's QR code or enters their handle into the mobile app, then confirms the amount to transfer.

  2. Both sender and receiver are notified of the transaction.
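To make the double-entry idea concrete, here is a minimal sketch of how the transfer step could be recorded. Everything here is invented for illustration (the `Ledger` class, the `@maria`/`@jose` handles); a real implementation would persist entries in a database and wrap each transfer in a transaction.

```python
from dataclasses import dataclass, field

@dataclass
class Ledger:
    # account handle -> balance in cents (the ledger tracks USD, per the post)
    balances: dict = field(default_factory=dict)
    # append-only journal of (account, signed_amount) entries
    entries: list = field(default_factory=list)

    def transfer(self, sender: str, recipient: str, amount_cents: int) -> None:
        if amount_cents <= 0:
            raise ValueError("amount must be positive")
        if self.balances.get(sender, 0) < amount_cents:
            raise ValueError("insufficient funds")
        # double entry: every transfer writes a debit and a matching credit
        self.entries.append((sender, -amount_cents))
        self.entries.append((recipient, amount_cents))
        self.balances[sender] -= amount_cents
        self.balances[recipient] = self.balances.get(recipient, 0) + amount_cents

ledger = Ledger(balances={"@maria": 10_000, "@jose": 0})
ledger.transfer("@maria", "@jose", 2_500)
# entries always sum to zero, so the books stay balanced by construction
assert sum(amount for _, amount in ledger.entries) == 0
```

The key property is that a transfer is never a single mutation: it is always a paired debit and credit, which makes the ledger auditable after the fact.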

The on-ramp problem

Great, so we figured out the basics of how to get users to send funds to each other. However, such a system is only useful if it allows users to freely move money in and out of it. In other words: how would users convert their funds into fiat and vice versa? Here is where things get tricky. In fact, while implementing the transaction engine is fairly straightforward, creating a bridge between this system and the local banking system is a compliance problem.

  • We could build integrations with the Stripes of the world to issue payouts to our users. However, as a part of the sign up process, payment processors ask their business users to provide documentation that, at the moment, is pretty much impossible to obtain from the Venezuelan Ministry of Economy and Finances.

  • We could build a network of wakalas (agents) similar to the one Safaricom developed in Eastern Africa with M-Pesa. However, building such a network would require massive amounts of funding and years of work. The political and economic instability of the country would probably make it extra hard to raise funding.

Yapago

Conclusion

At the moment it looks like, with my resources, connections and commitments, the best I can do is a wrapper around current solutions (eg Binance). This would remove the core value of my solution while making it overly dependent on incumbent services, which could easily eliminate my solution by implementing a new lightweight interface.

Google Maps for events

· 10 min read

Every few months, a new "events" app appears on Hacker News. Whether it’s music concerts, sports, art shows, or local meetups, every developer has—at some point—dreamed of a unified map showing everything happening in their city. The appeal is obvious, especially since I have often felt the need for such an app first-hand:

  • As a frequent traveller, I often find myself in unfamiliar contexts, and would like to know what's happening around me.

  • I am not on social media, nor am I thrilled at the idea of being exposed to mountains of social media slop just to know what's happening around me.

  • Like an increasing number of people, I am looking for opportunities to meet new friends offline. An events app would help me navigate offline opportunities.

Yet, the domains of those apps I see periodically launching on Hacker News inevitably go vacant after a year, and we are back to square one. Why don't we have a "Google Maps for events" or "Netflix for events" yet?

The landscape

There are several solutions that currently sit at the edge of an "index of events":

  • Eventbrite, Meetup, Luma. These services specialise in social events, and serve as platforms to help organisers schedule, promote and manage events, while helping attendees discover and RSVP. They are probably the closest thing to what a "Google Maps for events" should look like. However, by focusing only on social gatherings, they gloss over all those events that are social but not peer-organised (eg a music concert).

  • Social media (including Facebook Events). This is probably the biggest one. WhatsApp groups and Instagram accounts hold massive amounts of information about what is happening and where. However, their incentive to expose users to as much content as possible defeats the very attempt to get offline mentioned above.

  • Songkick and similar music concert apps are fairly popular in the music world. However, since they are only focused on music events, they cast a net that is not wide enough to capture mainstream interest.

  • Ticketing platforms. Ticketmaster only promotes events it manages.

  • Official city websites. Berlin is a great example. However, these are fragmented, often publicly maintained websites that only report geographically specific data.

  • Location-specific newsletters and blogs. In expat and nomad hotspots like Berlin, London or Madrid, niche services like Social Butterfly radar are common. However, just like city web sites, they are fragmented and hyperlocalised.

  • Fever and similar services are positioned as a commercial product rather than an index. In fact, while Google Maps feels like a free index (I'd say on a similar level to Wikipedia), Fever feels like a closed, commercial system, where non-monetizable events are not reported.

  • Airbnb experiences, Tripadvisor Activities and similar services all offer custom, on demand experiences that are not exactly events.

Why hasn't Big Tech solved this?

Somehow, big tech seems to have avoided this problem so far. Google is probably the firm best positioned to tackle it. In fact, event data could be presented as a feature in their maps product, as smart cards in the search engine, or as a product of its own.

I can think of a few reasons why this kind of product does not exist yet:

  • It does not pass the toothbrush test. However, one can argue that many Google products do not pass that test either (think of Google Classroom).

  • It is not profitable enough. However, Google seems to have shipped many products in the name of collecting data about its users.

  • Data decay. Unlike information about businesses on Google Maps, information about events changes too quickly. However, products like Google Travel also expose information that changes quickly.

  • Businesses on Google Maps want to be found by definition. This is not necessarily true for events, where exclusivity is sometimes a feature.

What about Event Schema?

Google already provides a standardized "language" for events through Event Schema. This structured data allows any website — from a tiny local pub to a global giant like Ticketmaster — to tell Google exactly what an event is, where it’s happening, and how much a ticket costs.

The integration works through two primary architectural patterns:

  • The pull mechanism (crawl-based). This is the most common path. Developers embed JSON-LD scripts directly into their HTML. Google’s crawlers (Googlebot) identify these scripts during their routine rounds, parsing the data into a "Rich Result." However, this relies on crawl frequency; if an event is canceled or a time changes, there can be a lag of hours or days before the Search result reflects the reality on the ground.

  • The push mechanism (indexing API). For high-volume or time-sensitive platforms, Google offers the Indexing API. This allows organizers to "push" updates to Google in near real-time. This is the gold standard for accuracy, ensuring that if a game is postponed due to rain, the information is updated in the search index within minutes rather than days.
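As a concrete illustration of the pull mechanism, here is roughly what a JSON-LD payload for a single event could look like. All event details below (name, venue, price) are invented examples, not taken from any real listing.

```python
import json

# Illustrative schema.org Event markup; every value here is a made-up example.
event = {
    "@context": "https://schema.org",
    "@type": "Event",
    "name": "Open-Air Jazz Night",
    "startDate": "2025-07-12T19:30:00+02:00",
    "location": {
        "@type": "Place",
        "name": "Stadtpark",
        "address": "Hamburg, Germany",
    },
    "offers": {"@type": "Offer", "price": "15.00", "priceCurrency": "EUR"},
}

# A site would embed this string in its HTML inside a
# <script type="application/ld+json"> tag for Googlebot to parse.
json_ld = json.dumps(event, indent=2)
print(json_ld)
```

This is exactly the kind of machine-readable annotation that lets a crawler turn an arbitrary web page into a structured "Rich Result".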

The structured data gap

While Event Schema is a powerful tool, it hasn't solved the "events index app" problem for one simple reason: implementation is voluntary. A massive amount of local culture happens in the "dark matter" of the web. PDF flyers, Facebook images, or simple text on a church bulletin board. Until a system can reliably turn that unstructured "mess" into valid Schema, the index will always favor commercial entities with SEO budgets over the local community gatherings that many of us are actually looking for.

Why Google should do it

While the toothbrush test might be a hurdle, there are three compelling strategic reasons why Google is the only player that could—and should—build a universal events index.

  1. Owning Position Zero and the Answer Engine. Google’s search business is currently in a transition from a list of blue links to an AI-driven "answer engine". By building a dedicated platform, Google would no longer be a secondary consumer of fragmented data. They would become the primary source, ensuring they consistently hold position zero - the coveted top-of-page real estate. Also, a native platform would allow Google to enforce a level of data quality (standardized timezones, precise geocoding, and verified links) that currently eludes third-party SEO-driven event pages.

  2. Completing the "offline interest graph". Google knows what you search for (intent) and where you are (location), but "events" represent the missing link: coordinated social intent. Integrating events into Google Maps transforms it from a static directory of businesses into a living "pulse" of the city. If Google knows you’ve RSVP’d to a gallery opening, it can proactively trigger a whole ecosystem of services: a calendar entry, a Waymo ride-hail suggestion, or a YouTube Music notification for the artist's latest track. This creates a "flywheel" effect where one event signal drives engagement across multiple Google products.

  3. Solving the "Social" Problem via Utility. Google has historically struggled with social networking (e.g., Google+). However, the next frontier of social isn't a digital feed; it's real-world coordination. Unlike a social network that requires you to post updates, an events platform is a high-value utility. By owning the discovery layer for where people meet offline, Google creates a social moat that doesn't rely on being a social media company. Also, in an era where Instagram and Facebook feeds are increasingly dominated by "slop" (algorithmic filler), a clean, utility-first discovery engine would win back users who are suffering from social media fatigue but still want to be social.

How to seed data into the platform?

As with any aggregator, an events service is only useful as long as it holds plenty of good-quality content. At the same time, because data is siloed and incumbents actively protect it (it's their moat), seeding the platform becomes the biggest problem. Here is what I can think of, besides what's already being done:

  • Long shelf-life content first. Manual seeding for events with long lead times, such as sports seasons, theater runs, and major festivals, should be prioritised. These "low-hanging fruits" provide immediate SEO authority and give the platform utility months before the events actually occur.

  • User-generated data and gamification. Very much like it helped Google Maps and StackOverflow, gamification could create soft incentives for users to help seed data. For example, allowing users to post events and showing how many others have viewed a posted event would encourage them to participate more.

  • Aggregating "dark social" via Matrix. Much of the world's event data is locked in private messaging silos like WhatsApp and Telegram. By leveraging the Matrix protocol, we can bridge these disparate channels, allowing our platform to scan and structure event details from public community chats that are invisible to traditional crawlers.

  • Computer vision & physical signals. We could bridge the gap between the physical and digital worlds by using OCR (Optical Character Recognition) to scan street-level imagery. By identifying banners, posters, and flyers—often the only source of truth for local community events—we could capture high-intent data that incumbents ignore (caveat: Street View imagery is only refreshed every few years).

Challenges

Among the many questions still unanswered, these are the most interesting ones to me:

  • Event definition. What is an event exactly? Would a movie at the cinema be an event? Would a long-running art show in a gallery be an event? Do experiences count as events? This question is important, since the definition of "event" drastically changes the perception of the service.

  • Quality of data. Since much of the data would be generated by users, there is no mathematical certainty of its quality. Cross-checking can be used to mitigate the risk of invalid or fake data.

  • Event duplication. Aggregating information from different sources almost always involves a risk of duplication, which means that deduplication mechanisms are needed. The solution to this kind of problem should be within reach, since a combination of the name, description, date and location of an event should allow us to determine, with a relatively high degree of certainty, whether two items actually refer to the same event.
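To sketch how such a deduplication mechanism could work, here is a toy normalisation key. The specific choices (stripping accents and punctuation, rounding coordinates to roughly a kilometre) are arbitrary assumptions; a production system would likely combine a coarse key like this with fuzzy matching on descriptions.

```python
import re
import unicodedata
from datetime import date

def dedup_key(name: str, event_date: date, lat: float, lon: float) -> tuple:
    """Build a coarse key; events sharing a key are candidate duplicates."""
    # Normalise the name: strip accents, punctuation and case.
    normalised = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
    normalised = re.sub(r"[^a-z0-9]+", " ", normalised.lower()).strip()
    # Round coordinates to ~1 km so slightly different venue pins still match.
    return (normalised, event_date.isoformat(), round(lat, 2), round(lon, 2))

a = dedup_key("Café del Mar: Live!", date(2025, 6, 1), 40.4168, -3.7038)
b = dedup_key("cafe del mar live", date(2025, 6, 1), 40.4190, -3.7010)
assert a == b  # the two listings collapse into one candidate duplicate
```

Grouping by such a key narrows the problem from "compare everything to everything" to "inspect a handful of candidates per bucket".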

Go-to-market strategy

Because its target audience is so vast, an events platform would be extremely hard to promote all at once. The most successful attempt I have seen so far is Luma, which focuses specifically on tech events. This seems to be a classic example of "finding a narrow edge and sticking to it" until the platform is popular enough to allow for dilution into other verticals, such as sports and music events.

Conclusion

Much more could be written on this topic: from the use of agentic AI to crawl and process data, to the concept of a "pulse" of an event. Unfortunately, that would be too much for this humble post. I guess the main takeaway is - whether it is built by a tech giant looking to close the "offline intent" loop or a scrappy startup leveraging decentralized protocols and LLMs to parse the "dark web" of local flyers - the creator of this kind of service won't just be building a database. They will be building the interface for how we experience our cities. In an era of digital "slop," the most valuable map isn't the one that shows us where the buildings are—it’s the one that shows us where the people are.

Technical vs business decentralisation

· 6 min read

In the last few months I have had the chance to work from a coworking space focused on crypto projects. It has been an interesting experience, for several reasons. First, I was reminded, once again, how vast and differentiated the world of software actually is. While web development, CRUD apps and mobile apps make up the news that reaches me most frequently, there are hundreds, if not thousands, of verticals that I am completely unfamiliar with. Secondly, interacting with developers who work in the crypto space gave me a chance to think more deeply about ideas that I was already aware of, but had never really put serious thought into.

Decentralisation

One of the most frequent doubts I have concerns the problem space in which crypto is trying to provide solutions: is it attempting to give a technical solution to a practical problem? Initially, it definitely looked so. In fact, while Bitcoin made it easier to exchange currency with no middlemen, it is also true that most users would still want the backup option of being able to get human assistance when things go south (eg losing their credentials). However, the developers I met at the coworking space often mentioned how they were building new tools to allow for credential recovery and much more. It sounds like, as the ecosystem evolves, new, more user-friendly tools become available and lower the barrier to entry into the crypto world. If this is the general trend, then, given enough time, decentralised tools should completely remove the need for a man in the middle. In this sense, while we may have started out 15 years ago with a partial solution, we are now getting closer to a viable system that, besides being technically decentralised, also allows for business decentralisation. This brings about a question: do businesses actually want decentralisation?

Natural monopolies and conflicts of interest

The fewer options a customer enjoys, the more negotiating power a business has over them. That is, businesses have a natural incentive to decrease optionality in the market as much as possible. Silicon Valley famously encourages entrepreneurs to attempt to build monopolies. The question then follows: if businesses have a natural drive towards centralisation, does this not create a conflict of interest with the technical efforts towards decentralisation in software?

Exhibit A: the Internet

A great example of technical vs business decentralisation is the Internet. Built on decentralised technology, it was designed to have no single points of failure. Protocols like TCP/IP, HTTP, and SMTP are essentially "public goods" — no one owns them, and anyone can use them to build a service.

However, as the Internet matured, business forces aggressively centralised services built on top of it. While the "pipes" remained decentralised, the business-related components - for example data - became massive silos. Companies like Google, Amazon, and Meta leveraged network effects to create digital monopolies. They offered convenience and "free" services in exchange for control over data and user attention. This suggests that technical decentralisation is a necessary but insufficient condition for a truly open ecosystem. Without a way to align economic incentives, the gravity of business will always pull back toward a central hub.

Exhibit B: email vs instant messaging

A similar dynamic played out with messaging protocols in the form of email and instant messaging (Slack, WhatsApp, Discord, Telegram and the likes). Email can effectively be considered a "decentralised" protocol: no single individual or company owns email. Because it is built on open standards (SMTP, IMAP), a Gmail user can message a ProtonMail user, who can then read that message on an iPhone mail client or a Linux terminal. There is no single "CEO of Email" who can decide to shut down the network or force everyone to use the same app.

On the other hand, instant messaging networks are built on centralised designs. As a result, the Slacks and WhatsApps of the world are essentially walled gardens. One cannot send a WhatsApp message to a Slack channel (well, technically it can be done, but these networks were not built to interact that way). As a consequence, the companies running these services have come to own the entire stack: the protocol, the server, and the interface. From a user experience perspective, Slack is often "better" than email. It’s faster, has better emoji reactions, and feels more cohesive. Why? Because centralisation allows for rapid iteration. Slack can push a new feature to every user overnight. Email, being decentralised, takes decades to change because thousands of independent providers have to agree on new standards. As a developer, this is the ultimate "quality vs. sovereignty" trade-off. We often sacrifice the freedom of decentralised protocols (email) for the polished, high-velocity features of centralised products (Slack).

What are the best use cases for blockchains then?

If business naturally trends toward centralisation for the sake of efficiency, then the best use cases for blockchains are those where trust is more valuable than efficiency. These are some of the examples I can think of:

  • Voting systems. In a centralized system, the "admin" can modify the database. A blockchain-based system, where any citizen can potentially spin up a node to validate election results, makes the voting process "censorship-resistant" and "auditable," where the cost of a slow tally is worth the guarantee that the result is tamper-proof.

  • Philanthropy. Donors often distrust how funds are allocated. Smart contracts can ensure that funds are only released when specific, verifiable milestones are met, providing "programmatic transparency" that a traditional charity cannot match.

  • Asset registries (land, IP, carbon credits). These require a "permanent memory" that outlasts any single government or corporation. By using a blockchain, one ensures that a title deed or carbon credit cannot be double-counted or deleted by a corrupt official.

Conclusion

Technical decentralization and business decentralization are distinct forces. While we often view decentralization as a "silver bullet" for freedom, we must acknowledge that centralization is an efficiency feature. Especially in the business world, centralization enables the specialization and rapid scaling that form the foundation of our economy. The challenge for the current generation of developers isn't to decentralize everything, but to identify the specific areas where sovereignty is worth the inefficiency tax.

A fresh take on offline data collection

· 4 min read

Working in the solar sector in Tanzania taught me a hard truth: data collection in the field is a battle of attrition. Between spotty 2G connections, rugged hardware, and the sheer complexity of relational data (linking customers to assets to maintenance logs), the tools we use can either be our greatest ally or our biggest bottleneck.

For years, the industry has relied on the "Old Guard":

  • Epicollect: Great for simplicity and the fact that it's free (entire companies were built on top of it, so a big thank you to Imperial College), but closed-source. This means it cannot be easily customized. The consequence is that features like webhooks—essential for modern automation—are still unavailable.
  • ODK (Open Data Kit): The gold standard for many, but reliant on the aging XML/XLSForm standard, which can feel restrictive for complex app logic.
  • KoboToolbox: Popular for humanitarian surveys and rapid deployments, but customization beyond core forms can be limiting.
  • CommCare: Very powerful, but primarily suited for large-scale enterprise clients with significant configuration budgets.

Field data collection: Can we do more with less?

While these tools brought incredible value, technology has moved on. We can now build custom, high-performance field tools with a fraction of the previous engineering effort. Modern solutions should meet these criteria:

  • Lower barrier to entry: Easy to modify for developers in emerging markets using a unified, popular language stack.
  • Universal compatibility: Natively compatible with as many platforms as possible (Android, iOS, Web).
  • Enterprise-backed: Built on stable frameworks with long-term industry support.

Frontend: Flutter

In rural areas, your "office" is a smartphone. Flutter has changed the game by offering a single codebase that targets mobile, web, and desktop with production-grade stability.

  • Native Access: Since data often comes from QR code scans, Bluetooth (for solar inverters), or high-precision GPS, Flutter is an ideal choice because it can natively access device sensors without a "performance bridge."
  • Offline Excellence: Flutter allows for direct, native access to device storage (using tools like Isar or Hive), making it easier to cache thousands of records for offline use.
  • App Store/Play Store Ready: Unlike web-only tools, you can deploy a polished, installable app that works reliably in low-bandwidth environments.

Backend: TypeScript

TypeScript is now one of the most widespread programming languages globally. It is natively supported by modern edge function platforms and major backend frameworks. Using TypeScript across the stack means a developer can jump from UI logic to backend validation without the "mental tax" of switching languages.

Database: Postgres & PostGIS

Rural surveys are almost always location-focused.

  • Postgres provides the relational integrity needed to link complex hardware hierarchies (e.g., matching a specific battery serial number to a solar panel and a customer contract).
  • PostGIS turns your database into a geographic engine. It allows for advanced queries like: "Which solar arrays are within a 10km radius of the rising flood zone?" Standard JSON-based databases simply cannot compete with this spatial intelligence.
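For instance, the flood-zone question above could be expressed roughly like this. The `solar_arrays` table, its columns, and the WKT polygon are all hypothetical; `ST_DWithin` and `ST_GeogFromText` are real PostGIS functions, and using the geography type lets the distance be given in metres.

```python
# Hypothetical schema: solar_arrays(id, serial_no, geom), where geom is a
# PostGIS geography point. The flood zone is passed in as a WKT polygon.
RADIUS_M = 10_000  # 10 km

NEARBY_ARRAYS_SQL = """
SELECT a.id, a.serial_no
FROM solar_arrays AS a
WHERE ST_DWithin(
    a.geom,                               -- array location
    ST_GeogFromText(%(flood_zone_wkt)s),  -- rising flood zone
    %(radius_m)s                          -- metres, thanks to the geography type
);
"""

params = {
    # Invented coordinates, roughly in the Lake Victoria region.
    "flood_zone_wkt": "POLYGON((32.9 -2.5, 33.1 -2.5, 33.1 -2.3, 32.9 -2.3, 32.9 -2.5))",
    "radius_m": RADIUS_M,
}
# A driver such as psycopg would then run:
# cursor.execute(NEARBY_ARRAYS_SQL, params)
```

The point is that the spatial filter lives in one declarative clause, instead of application code iterating over every record and computing haversine distances by hand.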

Storage: S3

Apps like ODK and Epicollect allow users to upload images, audio files and more. S3 (or S3-compatible layers) provides virtually infinite, low-cost storage for these files, keeping the primary database lean and focused on searchable metadata.

Supabase & Convex

In recent years, platforms like Supabase and Convex have had an enormous impact on how we build. They allow a small team to deploy production-grade infrastructure in hours.

  • No Vendor Lock-in: Supabase is built on open-source standards (Postgres, GoTrue, PostgREST). If you need to move, you can export your entire database and self-host. Convex has also recently open-sourced its backend, offering similar portability.
  • Standardized Auth & RBAC: They offer built-in, secure solutions for social login and complex Role-Based Access Control (RBAC).
  • Row-Level Security (RLS): In Supabase, security is handled at the database level. You can write a single SQL policy to ensure a technician in Mwanza can only see their specific installations, making the entire system "secure by default."
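As an illustration of what such a policy could look like, here is a hypothetical Supabase-style RLS rule held as a SQL string. The `installations` table and `technician_id` column are invented for this example; `auth.uid()` is the helper Supabase provides to read the current user's id inside a policy.

```python
# Hypothetical RLS setup: each technician may only read their own rows.
# Table and column names are invented; auth.uid() is Supabase's helper.
RLS_POLICY_SQL = """
ALTER TABLE installations ENABLE ROW LEVEL SECURITY;

CREATE POLICY technician_reads_own_installations
ON installations
FOR SELECT
USING (technician_id = auth.uid());
"""
print(RLS_POLICY_SQL)
```

Because the check runs inside the database, every client (mobile app, API, dashboard) gets the same guarantee without repeating the logic in application code.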

Conclusion

While the "Old Guard" of data collection paved the way, the modularity of Flutter, TypeScript, and Postgres might be the future. For the solar, health, agro sectors, and for any industry working in the "last mile", this stack could be a way to do more with less, empowering local developers to build the tools their communities actually need.