Flipt is built and delivered as a single standalone binary. Head to the Self Hosted section for more details.

Running Flipt with the default configuration is a great way to get to grips with using it. However, the default settings have some limitations which might make it impractical in a production environment. This guide explores some deployment configurations for increased scalability and reliability.

Defaults

By default, Flipt runs as a single instance process backed by SQLite. Additionally, all API interactions go directly to this database.

Flipt single replica deployment

However, Flipt can be run in front of an externally managed relational database (e.g. PostgreSQL, MySQL or CockroachDB), allowing operators to run multiple instances.

Caching can also be configured: either in-memory or shared via a distributed solution such as Redis. Enabling caching reduces the number of interactions with the database, making reads significantly faster.

Flipt multiple instance configuration diagram

Horizontal Scalability

As mentioned, Flipt runs on top of SQLite by default. SQLite is an embedded relational database, backed by a single file on disk. Access to the database is limited to a single writing process on the same machine. Using SQLite in this way means Flipt can only be run as a single instance.

In many scenarios, it’s advantageous to run multiple instances of a service like Flipt. Doing so can provide redundancy during critical failures. It also allows an operator to scale the number of instances to meet throughput demands.

An externally hosted relational database is required to scale Flipt horizontally (run multiple instances with a shared backend). PostgreSQL, MySQL and CockroachDB are the currently supported relational backends. Check out Configuration: Storage for more details on how to configure Flipt’s available storage backends.
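
For example, pointing Flipt at an external PostgreSQL instance is a single configuration change. A minimal sketch (the hostname and credentials below are placeholders; see Configuration: Storage for the authoritative key names):

db:
  url: postgres://flipt:changeme@my-postgres:5432/flipt?sslmode=disable

Every instance configured with the same URL shares the same flag state, so instances can be added and removed freely.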

Once configured with one of these databases, you can run multiple instances safely. Flipt takes care of schema management and some state management operations (e.g. automated deletion of expired API client tokens).

To run multiple instances of Flipt, you will need to put them behind a load balancer. Nginx, Caddy, and Envoy are all suitable choices.
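
As an illustration, a minimal Envoy configuration that round-robins HTTP traffic across two Flipt instances might look like the following sketch (the flipt-1 and flipt-2 hostnames are placeholders, and a production listener would add TLS, timeouts, and health checking):

static_resources:
  listeners:
    - address:
        socket_address: { address: 0.0.0.0, port_value: 8080 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: flipt
                route_config:
                  virtual_hosts:
                    - name: flipt
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route: { cluster: flipt }
                http_filters:
                  - name: envoy.filters.http.router
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
    - name: flipt
      type: STRICT_DNS
      connect_timeout: 1s
      load_assignment:
        cluster_name: flipt
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address: { address: flipt-1, port_value: 8080 }
              - endpoint:
                  address:
                    socket_address: { address: flipt-2, port_value: 8080 }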

Caching

Flipt supports both in-memory caching and the ability to use an external system such as Redis. Caching allows Flipt to reuse recently computed results, which reduces the number of requests to the backing database and avoids redundant evaluation work.

In-memory caching can be enabled via the “caching” section of the configuration file and requires no external dependencies.
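
As a sketch, enabling an in-memory cache with a one-minute TTL might look like the following (the key names here are illustrative; the Configuration: Cache section is authoritative):

cache:
  enabled: true
  backend: memory
  ttl: 1m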

Details on these configuration options can be found in the Configuration: Cache section.

However, in-memory caching has a limitation when multiple instances of Flipt are run in parallel: because each instance maintains its own isolated cache, the benefits diminish as the number of instances grows.

By using a remote system such as Redis to store cache data, multiple Flipt replicas can share the same cache. A single (logical) shared Redis instance is all that's required to see the benefits of this kind of caching.
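
A sketch of the equivalent Redis-backed configuration, assuming a Redis host reachable as my-redis (again, see Configuration: Cache for the exact keys):

cache:
  enabled: true
  backend: redis
  ttl: 1m
  redis:
    host: my-redis
    port: 6379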

Redis is currently the only supported external caching backend. We're considering adding more options (such as Memcached) in the future. Please open an issue if there's a specific caching backend you would like to see supported.

Health Checks

Flipt exposes health check endpoints for both HTTP and gRPC. These endpoints are useful for orchestrators such as Kubernetes to determine if a Flipt instance is healthy and ready to serve traffic.

Health checks are not only useful in a Kubernetes environment but can be used in any environment where you need to determine the health of a Flipt instance.
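
For example, in Kubernetes these endpoints can back liveness and readiness probes. A sketch, assuming Flipt's default HTTP port of 8080 (the endpoint itself is described below):

livenessProbe:
  httpGet:
    path: /health
    port: 8080
readinessProbe:
  httpGet:
    path: /health
    port: 8080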

HTTP

Flipt exposes a health check endpoint at /health which can be used to determine the health of a Flipt instance. The endpoint returns a 200 status code and a JSON body with a status field if the instance is healthy and ready to serve traffic.

$ curl -v http://localhost:8080/health

> GET /health HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/8.1.2
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/json
< Content-Length: 21
<
{"status":"SERVING"}

gRPC

Flipt exposes a gRPC health check via the standard grpc.health.v1.Health/Check method, which can be used to determine the health of a Flipt instance. The method returns a status of SERVING if the instance is healthy and ready to serve traffic.

$ grpcurl -plaintext localhost:9000 grpc.health.v1.Health/Check

{
  "status": "SERVING"
}

See the gRPC Health Checking Protocol for more details.

Kubernetes

Flipt already supports running as a Docker container, so running it within a Kubernetes environment is straightforward.

The easiest way to get started is to use our official Helm chart. The chart provisions a Kubernetes Deployment of Flipt along with a Kubernetes Service.

If you are already familiar with Kubernetes, you can get started by using the flipt:latest image (or pin to a specific version).
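
As a sketch, a minimal Deployment might look like the following (the image name reflects the flipt/flipt Docker Hub repository; adjust if you pull from elsewhere, and the ports are Flipt's defaults):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flipt
spec:
  replicas: 2  # running more than one replica requires an external database (see Horizontal Scalability)
  selector:
    matchLabels:
      app: flipt
  template:
    metadata:
      labels:
        app: flipt
    spec:
      containers:
        - name: flipt
          image: flipt/flipt:latest  # prefer pinning a specific version
          ports:
            - containerPort: 8080  # HTTP API and UI
            - containerPort: 9000  # gRPC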

Sidecar Deployment

We published a blog post that explores an efficient way to run Flipt as a sidecar.

Flipt sidecar deployment configuration

The Flipt (master) instance is the source of truth for all feature flag state; it's where Flipt users make edits through the UI.

The Flipt Exporter is a Kubernetes CronJob that uses the flipt export command to upload potentially changed state to an S3 bucket. Several instances of Flipt can then be run using the S3 bucket as their source of truth. One such example is the Flipt Sidecar in the diagram, which is co-located (in the same Kubernetes pod) with a main process that uses a Flipt client.

The overall idea is for the main process to achieve faster evaluations by calling Flipt over localhost, rather than going through a central Flipt instance that could be running anywhere in the cluster.
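
A sketch of what the sidecar's read-only, S3-backed configuration might look like, assuming Flipt's object storage backend and a placeholder bucket named my-flipt-flags (check Configuration: Storage for the exact keys and supported versions):

storage:
  type: object
  object:
    type: s3
    s3:
      bucket: my-flipt-flags
      region: us-east-1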

Further Considerations

Flipt primarily relies on the backing database to achieve scalability. It offers caching to minimize dependence on the backing store and avoid redundant work. However, attention should be paid to the health and performance of the backing database and to the interactions between Flipt and its storage.

Feature flag systems are typically read often and written to infrequently. This affords some flexibility in how such a system can be deployed and operated. When deployed against a remote database, Flipt operates as a stateless system, allowing operators to deploy multiple instances in various configurations. A more advanced deployment might run Flipt in two separate tiers: one servicing your application's flag evaluations (a read tier) and another serving the Flipt dashboard and flag state changes (a write tier).

This approach has several potential benefits:

  • Failures in either the read or write tier can be isolated from one another. A failure in the tier serving the dashboard wouldn't necessarily affect flag evaluations.
  • Reads can be configured with access to the cache, while writes can be isolated from the caching tier. This can reduce the number of connections required on your caching layer.
  • Reads can be deployed in front of read-only replicas of a database, with the write tier connecting directly to the primary, allowing more potential for scale in the database layer. A configuration sketch for both tiers follows this list.
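
Concretely, both tiers can run the same Flipt build with different configuration. A hedged sketch (hostnames are placeholders; the exact keys live in Configuration: Storage and Configuration: Cache):

# read tier: serves evaluations from a read replica, with the shared cache enabled
db:
  url: postgres://flipt:changeme@postgres-replica:5432/flipt
cache:
  enabled: true
  backend: redis
  redis:
    host: my-redis
    port: 6379

# write tier: serves the dashboard and writes directly to the primary
db:
  url: postgres://flipt:changeme@postgres-primary:5432/flipt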

Flipt separate read/write tier deployment configuration

Flipt ships with metrics, logging, and tracing covering both application behavior and system performance. These pieces of telemetry can be useful for understanding the constraints within your setup of Flipt. We recommend reading the Configuration: Observability section to understand how to extract these measurements.
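
For example, Flipt exposes metrics in Prometheus format on its HTTP port. A minimal scrape configuration might look like the following sketch (the flipt:8080 target is a placeholder for wherever your instance is reachable):

scrape_configs:
  - job_name: flipt
    metrics_path: /metrics
    static_configs:
      - targets: ["flipt:8080"]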