Hyper Open Edge Cloud

Why choose a Hyper Open Cloud?

Hyper Open Cloud protects your company's digital independence and trade secrets against the vendor lock-in at the core of conventional public clouds.
  • Last Update:2020-09-03
  • Version:011
  • Language:en


  • Windows 95
  • Public cloud dystopia in 2020
  • Why choose a Hyper Open Cloud?
  • Rapid.Space
  • Why choose Rapid.Space?

I will introduce today an idea which is probably new to many of you: Hyper Open Cloud. And I will try to convince you why you should consider it as soon as possible for your company if you wish to protect your digital independence and your trade secrets.

In the first part, I would like to recall the story of Windows 95: how everyone thought it would eliminate all competition despite its numerous flaws, and how it eventually failed to conquer the world because of a piece of Free Software called Linux.

In the second part, I will show why public clouds in 2020 have a lot in common with Windows 95 from 25 years ago. Most companies believe they should adopt them, despite their numerous flaws. But a new idea called "Hyper Open Cloud" will most likely prevent them from conquering the world.

In the third part, I will explain how Hyper Open Cloud can bring the same benefits to cloud services as Free Software does: the right to use, the right to copy, the right to study and the right to improve. These rights make your company more efficient. They create an incentive for conventional public clouds to fix their flaws.

In the fourth part, I will introduce Rapid.Space, the Linux of Hyper Open Cloud. Its underlying technology was created in 2009 and has been used in companies such as SANEF, PSA Group or the City of Munich. It is available worldwide, including in China.

In conclusion, I will list the reasons for your company to adopt Rapid.Space. Besides being Hyper Open, Rapid.Space is also in many areas more capable than conventional public clouds, thanks to its native support of distributed edge computing and radio access networks.

Remember Windows 95

When Windows 95 and Windows NT 4.0 were released, all companies in the world started to switch their Macs and Unix workstations to Microsoft. There was a kind of consensus that Apple was financially dead and that the days of Unix were over. 

But, for many of us, the Blue Screen of Death (BSoD) was the true signature of Windows 95. It was displayed after each system crash, which happened quite often.

What were the problems with Windows?

  • Expensive
  • Unavailable
  • Unreliable
  • Unsupported
  • Insecure
  • Incompatible
  • Lock in
  • Monopoly

Windows 95 had many flaws.

It was expensive: its license cost was $209.95. This is equivalent today to the price of a mid-range laptop or to 100 times the price of an Android license.

It was not available in countries such as China, Cuba, Iran, North Korea, etc. In China, Microsoft was still going through Taiwan to deliver Windows to mainland China with a 2-year delay. Bill Gates had not yet conquered China.

It was unreliable. I guess everyone noticed it.

It had poor support with service packs that sometimes introduced more bugs. Stable versions were then no longer available.

It was insecure, either because of poor design or because of the consequences of the Echelon programme, the ancestor of the NSA's PRISM programme revealed by Edward Snowden.

It was incompatible. By default, one could not read floppy disks from a Mac or a Unix workstation on a Windows 95 PC.

It created lock-in. Windows 95 only supported Intel CPUs. It could not run on Apple's computers based on PowerPC.

Also, Microsoft was acting increasingly as an aggressive monopoly. In order to distribute Windows, PC makers had to sign a contract which de facto prevented them from selling competing operating systems. IBM's OS/2 slowly disappeared.

Hijacking the World (1998)

Many people were quite upset with this situation. Some wrote books, which sold very well and accurately described the tactics of Microsoft and the flaws of Windows.

Read for example "Hijacking the World" by Roberto Di Cosmo and Dominique Nora.

But this did not change Microsoft's behaviour.

US vs. Microsoft (1998)

The U.S. vs. Microsoft case concluded that Microsoft was a monopoly and should be dismantled.

This did not change Microsoft's behaviour either. Microsoft appealed and, after a few years, the court's decision was overturned.

OSF/1 (1992)

There was also a group of big companies, including DEC, IBM and HP, trying to create a common interoperable system called OSF/1. After much debate, it never worked and was never adopted.

Free Software (1983)

  • right to use
  • right to copy
  • right to study
  • right to modify and redistribute

What changed the path of history is a philosophy created by Richard Stallman in 1983: Free Software. According to Richard Stallman, users should have the right to use, the right to copy, the right to study and the right to modify and redistribute the software they use. Those are fundamental rights.

This philosophy, which was also IBM's philosophy in its early days, goes directly against Microsoft's vision of market control through intellectual property rights.

And like any good philosophy, it is stronger than any book, law or product.

SuSE Linux (1994) & Red Hat Linux (1995)

  • Affordable
  • Available
  • Reliable
  • Supported
  • Secure
  • Compatible
  • Open
  • Diverse

It took about 10 years for Free Software to turn into a product: Linux distributions.

The first Linux distributions were released by SuSE in 1994 and by Red Hat in 1995.

They were affordable: cheaper than Windows and downloadable for free.

They were available everywhere, including in China, Cuba, Iran and North Korea.

They were reliable. If one found a bug, he or she could fix it or let someone else fix it.

They were supported, in some cases for more than 10 years for companies such as Airbus.

They were secure thanks to code audits. Both the American and Chinese governments independently audited the source code of Linux and created military-grade distributions.

They were compatible with all filesystems: Mac, Windows, Unix, etc. And if a system was missing, you could add your own.

They were open, with now more than 10 CPU architectures supported, including open source CPUs such as OpenRISC 1000 and RISC-V.

They are diverse with now dozens of Linux distributions serving different purposes and applications.

Linux and Free Software are now used in all companies. Thanks to Android, there are more devices in the world based on Linux than PCs running Windows. The majority of American schools use Linux-based Chromebooks. 74.2% of servers facing the Internet run Linux.

Windows is not dead, obviously. But it had to improve tremendously in order to keep up with Linux. And it even runs Linux nowadays.

Public cloud dystopia in 2020

History seems to repeat itself in 2020 with public clouds.

In the West, AWS (31%) and Azure (20%) are leading the public cloud market with GCP (6%) struggling behind.

In China, Alicloud (43%) is the leader of the market with Tencent (17%) and Baidu (9%) growing fast behind.

All public cloud providers are using similar tactics to compete against each other.

First, they provide free training and free service to users through startup programmes such as "Azure for Startups" or through global partnerships with system integrators such as ATOS and Google.

Users then develop applications based on proprietary cloud APIs they were trained to use.

After a year or two, the cloud service is no longer free and actually becomes horrendously expensive. Some startups receive a $100,000 monthly bill just to run a couple of database transactions. But because the APIs are proprietary, users are trapped in cloud jail. They cannot repatriate their application. They cannot move to another cloud.

Meanwhile, governments are watching this absurd drama and use all sorts of laws to break into your company's trade secrets: the CLOUD Act in the US, the Cybersecurity Law in China, the Loi Renseignement in France, and so on.

What is the problem with public clouds?

  • Expensive
  • Unavailable
  • Unreliable
  • Unsupported
  • Insecure
  • Incompatible
  • Lock in
  • Anti-competitive practices

Conventional public clouds in 2020 have many flaws.

They are expensive: about 10 times what it would cost to do the same by yourself.

Western clouds are mostly unavailable in China, Cuba, Iran, North Korea. 

They are often unreliable: some components provided by the APIs and the Platform as a Service (PaaS) are either outdated or flawed. There is no way to fix them by yourself. Support will tell you to wait for the next release for a fix in a few months, if you are lucky.

APIs are not supported over long periods of time. They sometimes change, which forces users to change the code of their applications.

Conventional public clouds are deeply insecure: due to extraterritorial justice, foreign governments have the right to break into your trade secrets.

They use mutually incompatible APIs and mutually incompatible services which are based on different binaries and source code versions.

They try to lock you in and prevent you from repatriating on premise or moving to another cloud.

They also sometimes abuse their dominant position to kill competition. Alicloud did this to CDN providers in China. Azure's approach of bundling free cloud and free training with proprietary APIs might also fall into this category.

The age of surveillance capitalism (2018)

Slowly, authors are realizing the dangers of public clouds. Initially, books such as "The age of surveillance capitalism", written by Harvard Business School's professor Shoshana Zuboff, focused on how social networks and smartphones are a danger to our individual lives and democracies.

Le piège américain (2019)

In "Le piège américain" (The American Trap), Frédéric Pierucci, former VP of Alstom, explains how his company was destroyed and dismantled by General Electric with the support of the NSA. He also explains how trade secrets collected by the NSA on public clouds were used against Alstom and against him, and served as evidence to send him to jail.

It is only a matter of time before we see more stories and more books about how conventional public clouds are abusing their position to destroy companies. A company which relies on a conventional public cloud for its artificial intelligence or to distribute its applications can face at any time a sudden price increase by its cloud supplier and stop being profitable, without any means to oppose this. What happened to Fortnite in August 2020 should be a warning to all users of AWS, Azure or Alicloud.

Google vs. EU (2010, 2016, 2020)

Public clouds are thus facing increased scrutiny by market regulators. Google has already faced three lawsuits from the European Commission: in 2010, 2016 and 2020.

None of the lawsuits have been able to curb their domination.

Gaia-X (2019)

France and Germany are now trying to gather large companies with the goal of creating an interoperability and regulation framework for public clouds, Gaia-X, with a governance complexity reminiscent of OSF/1. One of Gaia-X's first decisions was to invite AWS, Azure and Google to contribute to the definition of the regulation. 21 of the 22 founding members of Gaia-X rely heavily on cloud technologies made by Google, Microsoft, IBM, Huawei or Dell. If history repeats itself, Gaia-X may end up like OSF/1: a lot of resources wasted with no tangible results.

Freedom Box (2011)

In 2011, Eben Moglen of the Free Software movement had already foreseen the problems of conventional public clouds. He created a solution intended to replace the cloud with Free Software and Open Source Hardware: the Freedom Box. However, by focusing on eliminating the cloud altogether, Moglen also failed to capture its economic benefits for enterprises in terms of economies of scale. The Freedom Box is thus not a competitive solution for the enterprise market.

The same happened with the numerous open source cloud software projects which emerged. I personally trust OpenSVC, Proxmox and SlapOS. I have seen people use them in different ways with great results. But they are just software, not a cloud service. And the enterprise market needs a cloud service, because only a cloud service provides 24/7 operation of a complex distributed system at low cost, without having to dedicate a team of 5 engineers.

Hyper Open Cloud (2019)

  • Free Software
  • Open Source Hardware
  • Open Service

Just like with Windows 25 years ago, books, lawsuits or consortia will not solve the problems of public cloud in 2020. Those problems are too deeply rooted in the nature of the digital economy, with forces that can take down a government.

We also now understand why Free Software alone does not solve the problems of public cloud in 2020. It misses the operation management aspect, which has to be wrapped around the software in order to turn cloud software into a cloud service. Moreover, some Free Software communities are simply hostile to cloud and thus focus on eliminating the cloud instead of creating a better cloud.

Something really new is needed: a kind of philosophy for the cloud world. We call it "Cloud Libre" or "Hyper Open Cloud", a term coined in September 2019 by Tariq Krim for the first preview of Rapid.Space at the Open Compute Conference in Amsterdam.

Hyper Open Cloud builds on top of the ideas behind Free Software and Open Source Hardware. But instead of focusing on the software or the hardware, it focuses on the service itself. Hyper Open Cloud makes sure that the way the cloud service is operated and provided to the client is open and transparent.

This is called "open service".

Hyper Open Cloud is thus the combination of Free Software, Open Source Hardware and Open Service.

Open Service Free Software (2016)

  • right to use a service
  • right to reproduce a service
  • right to study how a service is made
  • right to modify and provide a service

Open Service applies to the service industry the same ideas as Free Software to the software industry.

One of the earliest examples of an open service definition can be found in an executive order (2016-65) introduced by the French government on January 29, 2016 in relation to public utility markets.

An open service provides more rights to the client.

The right to use a service without borders or discrimination.

The right to reproduce the service.

The right to study how the service is made.

The right to modify the service and provide the modified service to other users.

The idea of open service applies to cloud but also to any service: operating a restaurant, providing electricity, water distribution, etc.

The lock-in problems that companies are facing with cloud providers are also not so different from the lock-in problems that local governments face with utility providers such as water distribution companies. By keeping operation procedures and operation data secret, water distribution companies have been able to slow down market competition, slow down innovation and double water prices.

Service lock-in problems can be eliminated with open service, whether for cloud, for water distribution or for any other service, because a service becomes more efficient whenever service operation information is shared between competitors and clients.

Rapid.Space (2020) & more will join (2021)

  • Affordable
  • Available
  • Reliable
  • Supported
  • Secure
  • Compatible
  • Open
  • Sovereign federation

After a successful preview in 2019, the Rapid.Space International company was formed in 2020. It is the first Hyper Open Cloud provider.

More providers are expected to join the Hyper Open cloud movement in 2021.

Rapid.Space demonstrates all the potential benefits of Hyper Open Cloud.

It is affordable: 2 to 10 times cheaper than AWS.

It is available everywhere, even in mainland China. And nothing prevents Rapid.Space from being made available in Cuba, Iran or North Korea in one way or another.

It is reliable: if one finds a bug in the service profiles, he or she can fix it or let someone else fix it.

It is supported with APIs that can last more than 10 years and service profiles that are not required to be upgraded.

It is secure through the possibility of code audit, hardware audit and operation audit.

It is compatible with any service users prefer. For example, Rapid.Space CDN does not force users to choose between Apache, NGINX or Caddy. It supports them all. And if one more is needed, users can add it.

It is open to any CPU target (x86, ARM, PowerPC, etc.) and it can be deployed and integrated with other cloud infrastructures (AWS, Azure, Alicloud, Hetzner, etc.).

And it is federated by relying on infrastructure owned by different companies in each country in order to achieve digital sovereignty.

Why choose a Hyper Open Cloud?

So, why should one use a Hyper Open Cloud?

The answer is simple: in order to avoid the trap of conventional clouds in 2020, a trap that will cause a lot of problems for your company, cost you a lot of money and maybe even cost you the control of your company.

By adopting Hyper Open Cloud immediately in addition to conventional public clouds, just like companies adopted Linux and Free Software in addition to Windows 25 years ago, you are protecting your future.

With Hyper Open Cloud, you are deploying in your company a different form of cloud service that will make your company much more competitive with a level of flexibility, sustainability and innovation that does not exist with conventional public clouds.

You are also sending a strong message to conventional public clouds. This strong message will make conventional public clouds improve their service and reduce their price.

Adopting Hyper Open Cloud is an instant win-win decision.

A quick win to complement what is still missing in conventional public clouds.

A quick win to prepare the next negotiation round with conventional public clouds.


Aren't public clouds Open Source?

I often hear the question: "aren't conventional public clouds open source?".

If you are aware of Microsoft Azure's marketing, you surely know that some lobbyists working at Microsoft claim that it is possible to use Azure only with "open source technologies and open formats".

It is absolutely true that Microsoft Azure is partly based on open source software, including Linux itself. Microsoft is even the author of SONiC, the open source network switch software that powers the Microsoft global cloud.

It is also true that Microsoft Azure uses a lot of open source hardware. Many switches at Microsoft are based on open source hardware of the Open Compute Project. This type of hardware is manufactured by companies such as Accton or Delta in Taiwan.

But not everything is open source, neither at Microsoft nor at AWS, Google or Alicloud. 

What is not Open Source in current public clouds?

  • operation management software
  • operation management procedures
  • some patches to open source software

Three key items which are not open source in conventional public clouds are the operation management software, the operation management procedures and certain patches to open source software.

The operation management software is the software that automates the delivery of cloud services: provisioning, configuration, orchestration, billing, monitoring, self-healing, disaster recovery, etc. And the operation management procedures are the procedures that engineers and technicians should strictly follow for all aspects of a cloud service which cannot be automated.
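As a rough sketch of what such operation management software automates, here is a minimal self-healing monitoring loop in Python. The service names and the use of `systemctl` probes are assumptions for illustration, not how any particular cloud actually implements it:

```python
import subprocess

# Hypothetical list of managed services; a real operation management
# stack would discover these from its service catalog.
SERVICES = ["frontend", "database"]

def is_healthy(service):
    """Probe a service via systemd; a real stack may use HTTP probes instead."""
    return subprocess.run(
        ["systemctl", "is-active", "--quiet", service]).returncode == 0

def restart(service):
    """Self-healing step: restart the failed service."""
    subprocess.run(["systemctl", "restart", service])

def monitor_once(services, healthy=is_healthy, heal=restart):
    """One monitoring pass: heal every unhealthy service, return their names."""
    healed = []
    for service in services:
        if not healthy(service):
            heal(service)
            healed.append(service)
    return healed
```

Real operation management layers provisioning, billing and disaster recovery on top of loops like this one, and it is precisely this layer that conventional public clouds keep secret.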

Free Software and Open Source Hardware without Open Source Operation Management is the same as eggs, ham and pasta without the precise recipe to cook them and the management procedures of the restaurant serving carbonara. Even if you hire an expensive chef, the taste will be different. And if you do not know how to manage a restaurant, you won't be able to serve any customer.

Operation management is thus the core of the secret know-how of conventional public clouds. Only a single open source software covers all aspects of operation management: SlapOS. And until the Rapid.Space handbook was released, all operation management procedures were secret.

In addition to operation management, some conventional public clouds apply patches to open source software and keep them secret. This also prevents repatriating or porting cloud services.

How is Hyper Open Implemented?

  • Free Software
  • Open Source Hardware
  • Open Source Operation Management
  • Public audits
  • Contributors
  • Zero Knowledge orchestration

Let us now understand how Hyper Open Cloud is implemented in practice.

Like many cloud services, it uses Free Software and Open Source Hardware. But unlike many cloud services, it uses only Free Software and Open Source Hardware.

All its operation management is open source: software and handbook of procedures.

Users can request an audit at any time, which is then published. 

In some Hyper Open Clouds, users can contribute their own infrastructure to create a public point of presence or a private point of presence. They can also contribute custom service profiles which extend the default service offer.

Also, in some Hyper Open Clouds, the cloud provider does not store user passwords. This is called Zero Knowledge, a technology which can guarantee sovereignty everywhere, even if some points of presence are subject to legislation with extraterritorial reach.

Reversibility: shop.rapid.space

Let us have a look at what Hyper Open Cloud means in practice.

For example, with Rapid.Space you can purchase online the same hardware as the one used by Rapid.Space in its own data centers. Rapid.Space tells you which model is used from which supplier. Most of the hardware is open source. A few components are licensed source, which means that it is possible to access the industrial design source files, but under a non-open-source license.

OM: handbook.rapid.space

In the case of Rapid.Space, the right to study of Hyper Open Cloud takes the form of a handbook and of public audits.

The Rapid.Space handbook not only describes how to use the cloud service. It also describes how to operate a point of presence, step by step. The management processes of the Rapid.Space International company are also being added to the handbook little by little, as a work in progress through constant updates.


In order to achieve sovereignty, servers are owned by independent entities. No passwords are stored on Rapid.Space management platform.

Rapid.Space servers in France are owned by Nexedi, a French company with more than 90% of French stockholders.

Rapid.Space servers in China are owned by Xunkongjian, a Chinese national company.

If the French secret services requested Rapid.Space to spy on the servers of Xunkongjian in China, Rapid.Space would answer: "sorry, we do not have the passwords".

But if the French secret services requested physical access to Nexedi's servers, then Nexedi would say "OK". The French secret services would then find out that smart Rapid.Space customers remotely configured an encryption key for the storage subsystem, which neither Nexedi nor Rapid.Space has access to. The same goes for the X509 credentials.
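The principle can be sketched with client-side encryption: the key is derived on the customer's machine and only ciphertext ever reaches the provider. The following Python snippet is a toy illustration only (a real deployment would use LUKS or a proper cipher, and a key derivation function such as Argon2); every name in it is hypothetical:

```python
import hashlib

def derive_key(passphrase, length):
    """Derive a keystream from a passphrase that never leaves the client.
    (Toy construction; real systems use a proper KDF and cipher.)"""
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(f"{passphrase}:{counter}".encode()).digest()
        counter += 1
    return stream[:length]

def xor(data, key):
    """Toy XOR 'cipher': applying it twice with the same key decrypts."""
    return bytes(a ^ b for a, b in zip(data, key))

# Client side: encrypt before upload; the passphrase stays local.
secret = b"trade secret"
key = derive_key("customer-only passphrase", len(secret))
stored_on_provider = xor(secret, key)   # the provider only ever sees this

# Provider side: without the passphrase, the stored data is opaque.
assert stored_on_provider != secret

# Client side: decrypt after download.
assert xor(stored_on_provider, key) == secret
```

Because the provider holds neither the passphrase nor the derived key, a legal request served on the provider yields only ciphertext.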

With this approach, most problems of trade secret violation in current conventional clouds can be solved.

Is it the only way?

  Possible compromises, each to be evaluated against the four rights (right to use, right to reproduce, right to study, right to modify):

  • No source
  • Licensed source
  • Proprietary hardware
  • Secret Operation Management
  • No public audits
  • No contributors
  • No Zero Knowledge

There might be alternative ways to implement Hyper Open Cloud.

Yet, what seems fundamental in order to respect the four rights of Hyper Open Cloud is to ensure that source code is available under open source licences for everything (software, hardware, operation management) and that public audits can be conducted to ensure that the service is implemented consistently with the operation management handbook.

The absence of contributors or of the Zero Knowledge technology may not prevent a cloud service from being Hyper Open.

What is the most important?

If you abandon...                          Then you lose...
Free Software / Licensed Source            Portability, Reliability and Security
Open Source Operation Management           Reversibility
Open Source / Licensed Source Hardware     Security
Public audits                              Trust
Contributors                               Sovereignty
Zero Knowledge                             Trade Secret

We know that it is difficult to meet all the requirements of a Hyper Open Cloud. Sometimes a company has to choose: do nothing, or move forward by breaking some rules of Hyper Open Cloud.

It is therefore useful to understand what one is going to lose by abandoning certain aspects of a typical Hyper Open Cloud solution.

If one accepts to use software without its source code, he or she should be prepared to face problems of portability, reliability and security.

If one accepts to rely on a cloud service with secret operation management, he or she should accept to lose reversibility and the ability to repatriate cloud loads.

If one accepts to use proprietary hardware instead of open source or licensed source hardware, he or she should accept an increased security risk in supply chain attacks.

If one accepts a cloud provider which rejects public audits, he or she should accept working without trust, because there is no trust without control.

If one accepts a cloud provider which rejects national contributions, then there is no way to guarantee digital sovereignty. Digital sovereignty can only be achieved if citizens of a given nationality can own, study and possibly modify a subset of the cloud infrastructure located in their country.

And if one accepts a cloud which stores passwords in a central database, then he or she should be prepared for violations of corporate trade secrets.

In summary, conventional public clouds as of now cannot guarantee portability, reliability, security, reversibility, trust, sovereignty and trade secret. But if conventional public clouds adopt some of the approaches of Hyper Open Cloud, then they may also enjoy some of its benefits.

Adaptability and Long Term Cost

                   Hyper Open Cloud    Conventional Public Cloud
  POC cost         Low to Medium       Zero to Low
  MVP cost         Low to Medium       Zero to Low
  Long term cost   Low                 High

There is still no general rule to evaluate costs of Hyper Open Cloud. We can however compare the cost of Rapid.Space and conventional public clouds.

During the POC and MVP phase, conventional public clouds provide extensive documentation, dozens of tutorials and sometimes "on site" assistance or even software development. This can make the cost of a POC or MVP very low, sometimes free.

However, after the POC and MVP phase, the price of conventional public clouds tends to be very high. Since there is no portability or reversibility, there is no way to change provider and lower costs.

In the case of Rapid.Space, a well-trained Linux developer will find an efficient way to use the VPS, CDN and SDN services and deliver at low cost both a POC and then an MVP. Less trained developers will take a bit more time.

After the POC and MVP phase, additional effort is required to automate all custom operation management using buildout scripts, especially for a scalable commercial product. This effort is quickly compensated by lower long-term costs, thanks to the benefits of Hyper Open Cloud in terms of cost control and reversibility.
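As a sketch of what such buildout automation looks like (the part names, URL and recipes below are illustrative assumptions, not an actual Rapid.Space profile):

```ini
[buildout]
# Illustrative profile: each "part" automates one operation
# management step of the deployed service.
parts =
    nginx-build
    nginx-config

[nginx-build]
# Compile the web server from source so the exact version is pinned.
recipe = slapos.recipe.cmmi
url = https://nginx.org/download/nginx-1.24.0.tar.gz

[nginx-config]
# Render the runtime configuration from a template.
recipe = slapos.recipe.template
url = ${buildout:directory}/templates/nginx.conf.in
output = ${buildout:directory}/etc/nginx.conf
```

Once such a profile exists, redeploying or repatriating the service is a matter of running buildout again on another machine, which is the reversibility argument in practice.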


Rapid.Space may be a new name for you. We are going to present here who we are and what our goals are.

Rapid.Space infrastructure is growing. It is now deployed in Europe and Asia. It will soon be deployed in the USA.


Rapid.Space was founded in 2020 by Nexedi, Amarisoft and a few VIPs from IT and telecom industries.

Nexedi brings to Rapid.Space its open source stack, in particular its billing platform, its edge-cloud platform and its big data platform, all open source.

Amarisoft brings to Rapid.Space its purely software-defined 4G/5G stack, which covers all aspects needed for commercial deployment, including SA, NSA, NB-IoT, etc.


  • Sovereignty, trust and cost control through reversibility
  • A reversible alternative to conventional public clouds
  • A reversible alternative to Palantir platform
  • A reversible Edge platform for Industry 4.0
  • A reversible converged RAN platform (4G/5G)

The goal of Rapid.Space is to provide sovereignty and trust through full reversibility. You may consider this goal as providing the kind of thing that companies such as Huawei, Palantir or AWS are not able to provide due to a combination of IP and legal policies.

This goal applies to every business which Rapid.Space is targeting.

Rapid.Space already provides a reversible cloud platform that can be used for public or private clouds. All components of this platform are open source, including the hardware, meaning that any customer can "clone" this platform on-premise or have it operated by a third party at no license cost.

Rapid.Space intends to provide a reversible big data platform with a scope similar to Palantir. All components of this platform are open source, including the hardware, meaning that any customer can "clone" this platform on-premise or have it operated by a third party at no license cost.

Rapid.Space intends to provide a reversible Edge computing platform which includes everything needed for Industry 4.0, including PLCs, sensors and actuators. Again, all components are open source.

Rapid.Space intends to provide a reversible RAN platform which supports 4G/5G and can be used for both private and public networks. Most components are open source. Some components may be licensed source, meaning that any customer can "clone" this platform on-premise and audit its source code at some license cost.

rapid.space / rapidspace.cn

Rapid.Space has two web sites: https://rapid.space (available worldwide except mainland China) and https://rapidspace.cn (mainland China). This provides a global coverage.

The primary service of Rapid.Space is a high performance virtual private server (VPS) at reasonable cost, combined with a CDN infrastructure for accelerated web content delivery. 

Global Datacenters

Rapid.Space is available in Europe (France, Germany, Sweden, Netherlands, Bulgaria), in Shanxi (northern mainland China) with two data centers, and in Taiwan.

Global IPv6 Backbone

The Rapid.Space IPv6 backbone is based on a hybrid mesh network which relies on hundreds of routers worldwide. Thanks to the Babel routing protocol (RFC 6126), all sorts of congestion can be avoided and latency can be minimized.

Global CDN

Rapid.Space provides HTTPS front-ends (HTTP/1.1, HTTP/2, HTTP/3) in 10 different locations worldwide. In China, Rapid.Space front-ends are placed with all major carriers: CT, CU and CM.

Basic Services

Rapid.Space's concept is to provide to developers the minimum they need in order to deploy an application worldwide.

There are three basic services: Virtual Private Server (VPS), Content Delivery Network (CDN) and Software Defined Network (SDN).

VPS provides a way for developers to install their applications. It is similar to dedicated server services from companies such as OVH or Hetzner in Europe.

CDN provides a front-end solution to deliver data to end users or to collect data from IoT devices. It is similar to Cloudflare, or to Qiniu CDN in China.

SDN provides a way to interconnect Rapid.Space CDN and VPS through a latency-optimized IPv6 network. This service is quite unique: it also provides a way to interconnect Rapid.Space to other cloud services (AWS, Azure, GCP, OVH, Alicloud, UCloud, Qingcloud, etc.) with good networking performance.

Based on this minimal approach, developers install open source software on the VPS by themselves and build their applications: database (MariaDB, PostgreSQL, MongoDB, etc.), web server (Apache, NGINX, etc.), load balancer (HAProxy, ProxySQL, etc.). They can rely on the vast libraries available in Python, PHP, Ruby, Java, Go, Node.js, etc. to extend features.

Developers may use whichever tool they prefer for devops: SlapOS, OpenSVC, Docker, Kubernetes, Ansible, Chef, Puppet, buildout, etc. Even though Rapid.Space is based on SlapOS and buildout, Rapid.Space service can be used with other devops technologies.

The philosophy of Rapid.Space is thus the opposite of that of conventional cloud providers in the USA or China. Rapid.Space provides very few services and lets developers rely on open source to achieve what they need. Thanks to this approach, developers keep control of their applications and can later move, if they wish, to another cloud platform.

There is no vendor lock-in.

Advanced Services (preview)

Some types of services can be difficult or time-consuming to implement. This is the case of services that require clustering (use of multiple servers), hard real-time (industrial edge) or radio frequencies (4G/5G vRAN).

For each of these services, Rapid.Space provides a solution based on open source software.

Rapid.Space provides a "Big Data" platform that combines the features of a data lake with transactional object storage, high-availability scalable relational database and out-of-core data processing in python (AI, physical models).

Rapid.Space provides an "Edge" platform that is optimised for automation (factory, building, etc.) and remote deployment of AI models.

Rapid.Space provides a "vRAN" network management system suitable for 4G/5G private networks (factories, hospitals, etc.) or public networks (telecom, government).

All advanced services are available to selected B2B customers as preview. General availability is expected in 2021.

All services are provided with source code under open source license (Big Data, Edge) or business license (vRAN).

Custom Services

Read online: How do Rapid.Space and SlapOS compare to AWS?

Any service that does not fit into Rapid.Space basic services (VPS, CDN, SDN) or advanced services (Big Data, Edge, vRAN) can be developed as a custom service.

Based on an early assessment, 85% of the cloud services provided by Amazon AWS could actually be implemented on Rapid.Space's low-cost, high-performance cloud using open source stacks such as SlapOS (75% of services) plus a few third-party Free Software projects (10% of services).

Rapid.Space provides a Platform as a Service (PaaS) so that developers can add new services to Rapid.Space.

Server-based custom services are developed with the buildout language and SlapOS nano-container technology. They cover features such as:

  • virtualisation (kvm)
  • automated disaster recovery
  • runtime (python, ruby, java, nodejs, golang, etc.)
  • database (mariadb, postgresql, NEO, etc.)

A collection of sample buildout profiles is provided. They cover a wide range of cloud services and even include an open source ERP.
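As an illustration, a minimal buildout profile might look like the following sketch; the part name and the "myapp" egg are placeholders, not a real Rapid.Space profile.

```ini
# Minimal buildout profile (illustrative; "myapp" is a placeholder egg)
[buildout]
parts = app

[app]
recipe = zc.recipe.egg:scripts
eggs = myapp
```

In practice, Rapid.Space profiles combine many such parts to compile, configure and wire the components of a service.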

One should however keep in mind that many cloud services are no longer required with the introduction of technologies such as Progressive Web Applications (PWA). Quite often, there is no need to develop a custom cloud service for Rapid.Space at all: a PWA will do better, because many server-based architectures can now be implemented as browser-based ones. Not only does this save time, money, energy and CO2, it also provides better scalability and portability.

Why Choose Rapid.Space?

  • Hyper Open (portability, reliability, security, reversibility & trust)
  • Available everywhere, including in China
  • Trade Secret through Zero Knowledge
  • Contribute your own cloud services (IaaS, PaaS, SaaS)
  • Contribute your own data center
  • Deploy on-premise
  • Build a private infrastructure
  • Interoperable with 3rd party public or private clouds
  • Transparent cost structure (2 to 10 times lower price)

Rapid.Space is different. It is Hyper Open, which brings portability, reliability, security, reversibility, trust, sovereignty and trade-secret protection.

Rapid.Space ensures global delivery of services (including in China). It protects its customers' trade secrets thanks to Zero Knowledge technology.

It is fully reversible (customers can quit Rapid.Space easily) and it is open to all sorts of contributions or extensions of its open source technology.

Anyone can contribute to Rapid.Space their own service in addition to the 70+ existing ones.

Anyone can contribute servers and data centers to extend the worldwide coverage of Rapid.Space, as long as Rapid.Space procedures are respected.

Rapid.Space can be deployed on-premise too in a way that is typical of hybrid cloud.

It is also possible for one to operate a completely private infrastructure based on Rapid.Space, as Teralab does.

It is even possible to deploy Rapid.Space services on third-party public or private clouds (AWS, OVH, Azure, Alicloud, Hetzner, Huawei, VMWare, etc.) and benefit from all Rapid.Space services including its IPv6 backbone, CDN, IaaS, PaaS, etc.

All costs of Rapid.Space are transparent and described in "Business Model of a Low Cost Cloud Operator". The price of Rapid.Space is based on electricity, real estate, hardware amortisation, networking, operation management costs (software, human), hardware maintenance and financial costs. A 20% margin is added to cover all other risks related to the operation of a cloud service. Basically, there is no blocker, no secret, no anti-competitive practice of any sort in Rapid.Space.
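The cost model described above can be reduced to simple arithmetic: sum the cost lines, then add the stated 20% margin. All figures below are illustrative assumptions, not actual Rapid.Space numbers.

```python
# Toy version of the transparent cost model: cost lines plus a 20% margin.
# Every figure here is an illustrative assumption (monthly cost per server).
monthly_costs = {
    "electricity": 20.0,
    "real_estate": 10.0,
    "hardware_amortisation": 25.0,
    "networking": 8.0,
    "operations": 12.0,          # software + human operation management
    "maintenance": 5.0,
    "financial": 3.0,
}

# Price = total cost plus the 20% margin stated in the business model.
price = round(sum(monthly_costs.values()) * 1.20, 2)
```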

Overall, Rapid.Space price is 2 to 10 times lower than conventional public clouds.


Rapid.Space vs. conventional public clouds:

  • Hardware: OCP (Rapid.Space) vs. OCP, CISCO, Huawei, Inspur, Dell, etc.
  • Basic Services: VPS (Rapid.Space) vs. Amazon EC2, AWS Command Line Interface, Amazon WorkSpaces, Amazon AppStream 2.0, Amazon WorkLink, Amazon CloudWatch, AWS Console Mobile Application, AWS OpsWorks, AWS Organizations, etc.
  • Advanced Services: Big Data (Rapid.Space) vs. Athena, CloudSearch, Elasticsearch, EMR, Kinesis, Managed Streaming, Redshift, QuickSight, Data Exchange, Data Pipeline, AWS Glue, AWS Lake Formation, AWS Step Functions, EventBridge, SQS, Aurora, Contact Lens, AWS IoT Events, AWS IoT SiteWise, Amazon Elastic Inference, Amazon Personalize, etc.
  • Custom Services: HTML5 PWA (Rapid.Space)

Rapid.Space adopts minimalism whereas conventional public clouds are based on hundreds of services.

We believe that minimalism is more efficient, whereas providing hundreds of services is just a trick used by conventional public clouds to achieve vendor lock-in.

The underlying idea in Rapid.Space is that most services are simple enough to be implemented by a single developer through package installation (deb, rpm, npm, etc.) or Progressive Web Applications (PWA). If some automation is needed, any open source technology (buildout, SlapOS, OpenSVC, Ansible, Docker, Kubernetes, etc.) can be used. Custom services of Rapid.Space fill the gap between packages and operation management automation.

Only the few sophisticated services that require a lot of resources or know-how are provided by Rapid.Space itself, to save developers' time: CDN, SDN, Big Data lake, Edge Computing and 4G/5G vRAN.

Guaranteed worldwide delivery

Thanks to its global IPv6 backbone and its CDN front-ends, simple applications can be created that automatically select the best front-end for each user. With this technology, users can always access corporate applications (ERP, CRM, etc.). This approach is much more suitable for corporate applications than DNS-based technologies, which only provide about 99% availability. 99% is fine for e-commerce, but it is not acceptable when a company's accountant happens to fall into the unlucky 1% and cannot reach the ERP.
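The front-end selection described above can be sketched from the client side: probe each front-end's health URL and keep the one with the lowest round-trip time. The hostnames below are placeholders, not real Rapid.Space endpoints, and the probe logic is an illustrative assumption, not the actual Rapid.Space implementation.

```python
# Sketch of client-side selection of the fastest CDN front-end.
import time
import urllib.request

FRONTENDS = [
    "https://frontend-eu.example.org/health",   # placeholder hostnames
    "https://frontend-cn.example.org/health",
    "https://frontend-tw.example.org/health",
]

def probe(url, timeout=2.0):
    """Return the round-trip time in seconds, or None if unreachable."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

def best_frontend(urls, probe=probe):
    """Pick the reachable front-end with the lowest measured latency."""
    timings = [(probe(u), u) for u in urls]
    reachable = [(t, u) for t, u in timings if t is not None]
    return min(reachable)[1] if reachable else None
```

Because every front-end is probed, an unreachable one is simply skipped, which is what makes this more robust than a single DNS answer.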

Big Data Platform

Wendelin is an open-source, 100% Python-based platform for data ingestion, storage, analysis, and visualisation. It covers the following features:

  • general purpose data hub (Wendelin)
  • object storage (file, ndarray, image, video, etc.)
  • relational storage (mariadb with repman)
  • streaming (fluentd)
  • batch transfer (embulk)
  • serverless python (scikit-learn, SciPy, Keras, etc.)
  • out-of-core processing (wendelin.core)
  • interactive visualisation (iodide)
  • high availability
  • high performance

Rapid.Space's Wendelin industrialises the complete chain of data-based applications from the collection of data at the edge over local IoT bridges to the central data analysis and visualisation in the cloud. Thanks to the integration of popular data science technologies such as Jupyter, SciPy, Pandas, Plotly and scikit-learn, the Wendelin platform provides an extensive collection of data analysis tools.

Wendelin is suited for industrialisation and unification of data-based services from data collection to visualisation. The ingestion and transformation of data is modelled by a business process-oriented approach, enabling traceability of data supply chains and pricing of data transformations.

Data Lake

Wendelin supports the real-time collection of data from multiple sources (machines, websites, e-commerce, clients and suppliers). Data engineers can choose from more than 100 ready-made plugins for different web services and databases thanks to the integration of ARM's open-source data collection solutions Fluentd (for streaming) and Embulk (for batch data). Collected raw data can then be aggregated and structured with PyData libraries such as Pandas and SciPy and finally be analysed automatically with machine learning tools such as scikit-learn or TensorFlow. Examples are intelligent searches and finding correlations.

Data Science Industrialisation

The business process-oriented approach of managing data analysis operations makes Wendelin a perfect fit for unification and automation of recurring data science tasks on a production system. Wendelin unifies data engineering through the usage of Python on both the analysis environment and the production cluster. All data is stored in Wendelin's distributed transactional object database NEO. Native out-of-core access to persistent NumPy ndarrays provided by the wendelin.core library allows for scalable analytics. Analysis operations can be implemented without restrictions on the available memory, and they do not need to be recompiled for running on the production cluster. With wendelin.core the complete NumPy API is available when accessing out-of-core ndarrays (unlike other solutions such as Apache Spark or Dask, which depend on a compatibility layer).
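The out-of-core idea can be illustrated with the standard library alone: a file-backed array accessed through mmap never has to fit in RAM at once. This is a stand-in for the concept only; wendelin.core's real API (persistent NumPy ndarrays backed by NEO) is different.

```python
# Stand-in illustration of out-of-core array access (NOT the wendelin.core
# API): a file-backed float64 array read and written through an mmap window.
import mmap
import os
import struct
import tempfile

ITEM = struct.Struct("<d")                 # one little-endian float64
N = 1000                                   # kept small for the example

path = os.path.join(tempfile.mkdtemp(), "bigarray.bin")
with open(path, "wb") as f:
    f.truncate(N * ITEM.size)              # allocate on disk, not in memory

with open(path, "r+b") as f:
    buf = mmap.mmap(f.fileno(), 0)
    for i in range(N):                     # write values through the mapping
        ITEM.pack_into(buf, i * ITEM.size, float(i))
    # Reduce over the file without ever materialising it as one in-RAM list:
    total = sum(ITEM.unpack_from(buf, i * ITEM.size)[0] for i in range(N))
    buf.close()
```

wendelin.core adds on top of this idea transactional persistence and the full NumPy API, which the raw mmap approach does not provide.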

Scalable RDB

Wendelin includes a scalable cloud-based implementation of MariaDB which provides high availability and high performance relational storage, as well as ETL-type connectors to other relational storages.

Success Cases & Services

The components of the Wendelin platform, ERP5, NEO and SlapOS are successfully used in automotive (PSA, Toyota) and aerospace industries (Airbus Defence and Space). Wendelin itself is used in Germany for ice detection on wind turbines and noise and vibration monitoring of construction sites. Services provided by Nexedi cover automation of data collection at the edge, deployment and management of a Wendelin cloud, analysis and visualisation as well as consulting on single topics or implementation of full-fledged big data applications on the Wendelin stack.


4G/5G vRAN

Rapid.Space vRAN is an "all-in-one" solution to deploy public or private radio networks that can share the same frequency bands for both 4G and 5G thanks to Dynamic Spectrum Sharing (DSS).

Rapid.Space's vRAN solution is based on rugged industrial hardware from selected vendors (Edge-core, BJT, AW2S).

Rapid.Space includes automated network operation management suitable for private networks with thousands of users deployed over unlicensed frequencies.

A dedicated Rapid.Space infrastructure can also be provided to deploy public networks over licensed frequencies with millions of users. It is described in this sample offer.

By removing vendor lock-in and technology silos, Rapid.Space vRAN service can support innovative applications based on the convergence of cloud computing, edge computing and telecommunications.

Industrial Edge

Real-time industrial automation for Industry 4.0 (real-time kernel)

Rapid.Space's Edge operates seamlessly as an extension of the Rapid.Space cloud, offloading to the edge critical services that cannot be hosted on the cloud.

Rapid.Space's Edge solution is based on rugged industrial hardware from selected vendors of open source hardware (Edge-core, Olimex).

It is designed for applications such as Industry 4.0, smart buildings, hospitals, etc. It supports continuous operation of critical functions that need to remain available even in case of Internet or cloud downtime. 

This includes industrial automation (virtual PLC), AI model as a service, IoT buffering, resilient networking and safety functions.
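The "IoT buffering" function mentioned above amounts to store-and-forward: queue samples locally while the uplink is down and flush them when it returns. The sketch below is an illustrative assumption about that behaviour, not Rapid.Space's actual implementation; `send` stands in for whatever transport is used.

```python
# Store-and-forward buffer for measurements collected at the edge.
from collections import deque

class EdgeBuffer:
    """Queue samples locally; deliver them in order once the uplink is up."""

    def __init__(self, send):
        self.send = send              # callable(sample) -> True if delivered
        self.pending = deque()

    def record(self, sample):
        """Accept a new sample and attempt delivery immediately."""
        self.pending.append(sample)
        self.flush()

    def flush(self):
        """Deliver queued samples in order, stopping at the first failure."""
        while self.pending:
            if not self.send(self.pending[0]):
                break                 # uplink down: keep the sample queued
            self.pending.popleft()
```

The queue preserves ordering across outages, which matters for time-series data feeding the cloud-side analysis.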

Experienced Cloud Integrators

In conclusion, I would like to talk about Rapid.Space network of experienced cloud integrators. Rapid.Space only selects cloud integrators and engineers with a long experience in at least 3 of the open source core technologies used by Rapid.Space.

Rapid.Space cloud integrators can provide training, cloud migration services, operation management automation, big data lake implementation, 4G/5G vRAN deployment and industrial edge computing. 

The current network covers most of the European Union, Russia, Japan, mainland China, Taiwan, Argentina and Brazil. We expect it to extend soon to North America and Africa.

Thank You

  • Rapid.Space International
  • Paris
  • +33629024425 - jp@rapid.space

For more information, please contact Jean-Paul, CEO of Rapid.Space (+33 629 02 44 25 or jp@rapid.space).